Introduction

This 15-minute tutorial shows you how to integrate generative AI into your skill by connecting it to a large language model, or LLM. An LLM is an AI system trained on vast amounts of text data. These models can be trained on various domains to perform a variety of tasks, including audience-specific text summarization and sentiment analysis of chat histories.

By integrating your skill with an LLM, you enable it to not only field a range of user input, but also to respond with context-appropriate answers in a human-like tone. To help the LLM predict the most likely words or phrases for its responses, you send it the appropriate context and instructions in a block of text known as a prompt. In response, the LLM generates a completion, a sequence of words or phrases that it believes are the most probable continuation of the prompt.

For this tutorial, you're going to integrate a skill with the Azure OpenAI model so that the skill can send the model a prompt asking it to evaluate snippets of skill-user chat histories for positive, negative, or neutral sentiment.

Objectives

  • Create a REST service to the Azure OpenAI service provider.
  • Create an event handler that transforms the REST payloads to and from the format used by Oracle Digital Assistant.
  • Add a component to the dialog flow to connect users to the Azure OpenAI model via the REST service.
  • Engineer the prompt that sends instructions to the Azure OpenAI model.
  • Test the prompt.

Prerequisites

  • A REST service for the Azure OpenAI model. This includes:
    • A POST endpoint.
    • An API key and value.
    • Sample provider-specific request and static response payloads.
  • Access to Oracle Digital Assistant.

Task 1: Create the REST Service to the Model

Your first task in integrating your skill with an LLM is to add a REST service to your instance that calls the model's provider. For this tutorial, we're using Azure OpenAI as an example, but you can use a REST service for any LLM. If an LLM REST service has already been configured for your instance, or if you're taking this tutorial in a lab setting where this service has been provided, then take note of the REST service name and move on to Task 2, where you create the skill.

  1. With the Oracle Digital Assistant UI open in your browser, click the main menu icon to open the side menu.
  2. Expand Settings, then select API Services.
  3. Open the LLM Services tab.
  4. Click +Add LLM Service.
  5. Complete the Create LLM Service dialog to create a POST operation to the provider's endpoint:
    • Name: Enter an easily identifiable name for the service. You'll reference this name later on.
    • Endpoint: Copy and paste the Azure OpenAI endpoint. For example:
      https://{your-resource-name}.openai.azure.com/openai/deployments/{deployment-id}/completions?api-version={api-version}

      Note:

      As shown in the above example, the endpoint must specify a completions operation, which enables the model to generate completions (the text results) for the provided prompt.
    • Methods: Select POST.
  6. Click Create.
  7. Complete the service by adding the API key and the request and response payload samples:
    • Authentication Type: Select API Key. Then copy and paste the key and value.
    • Include As: Select Query Parameters.
    • Key and Value: Paste the Azure OpenAI API key.
    • POST request: Select application/json as the Content Type.
    • Body: Add the payload sent with the request. For example:
      {
          "model": "gpt-4-0314",
          "messages": [
              {
                  "role": "system",
                  "content": "Tell me a joke"
              }
          ],
          "max_tokens": 128,
          "temperature": 0,
          "stream": false
      }
    • Static Response: Select 200-OK. Then add the sample payload body for the fallback response. For example:
      {
          "created": 1685639351,
          "usage": {
              "completion_tokens": 13,
              "prompt_tokens": 11,
              "total_tokens": 24
          },
          "model": "gpt-4-0314",
          "id": "chatcmpl-7Mg5PzMSBNhnopDNo3tm0QDRvULKy",
          "choices": [
              {
                  "finish_reason": "stop",
                  "index": 0,
                  "message": {
                      "role": "assistant",
                      "content": "Why don't scientists trust atoms? Because they make up everything!"
                  }
              }
          ],
          "object": "chat.completion"
      }
  8. Click Test Request to check for a 200 response.
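The static response has the same shape as a live completion. As a quick sanity check outside of Digital Assistant, the following plain JavaScript sketch (illustrative only, not part of the skill) shows where the completion text lives in that payload:

```javascript
// The sample 200-OK payload from the Static Response step above.
const sampleResponse = {
    created: 1685639351,
    usage: { completion_tokens: 13, prompt_tokens: 11, total_tokens: 24 },
    model: "gpt-4-0314",
    id: "chatcmpl-7Mg5PzMSBNhnopDNo3tm0QDRvULKy",
    choices: [
        {
            finish_reason: "stop",
            index: 0,
            message: {
                role: "assistant",
                content: "Why don't scientists trust atoms? Because they make up everything!"
            }
        }
    ],
    object: "chat.completion"
};

// Each element of choices is one completion; the generated text lives in
// message.content. A finish_reason of "stop" means the model ended normally.
const completion = sampleResponse.choices[0].message.content;
console.log(completion);
```

This choices[].message.content path is what the transformation handler you build in Task 4 maps into the format that Oracle Digital Assistant expects.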

Task 2: Create a Skill

With the LLM Provider REST Service added to the instance, you now need to create a skill that can call this service and connect users to it through its dialog flow definition.

To create this skill:

  1. With the Oracle Digital Assistant UI open in your browser, click the main menu icon to open the side menu.
  2. Click Development and then select Skills.
  3. Click the main menu icon again to collapse the side menu.
  4. Click the + New Skill button.

    The Create Skill dialog appears.

  5. Add a name in the Display Name field.
  6. For the other fields, leave the default values. Note that the Dialog Mode is Visual.
  7. Click Create.

Task 3: Connect the Skill to the Model

We're now going to enable the skill to access the LLM REST service by creating a custom component with an event handler that transforms the REST payloads into formats that are accepted by both the LLM provider and Oracle Digital Assistant.

Complete the following steps:

  1. Click Components in the left navbar.
  2. Click Add Service.
  3. In the Create Service dialog:
    • Enter a name that describes the service.
    • Accept the default setting, Embedded Container.
    • Select New Component.
    • Select LLM Transformation from the Component Type drop-down list.
    • Enter a descriptive name in the Component Name field.
    • Select Custom (located under Other) from the Template drop-down list.
  4. Click Create to add your component to the Components page.

    The completed component displays in the Components page.
  5. Select the component in the Components page to check its deployment status. When Ready displays, you can move on to the next step.

    Ensure that Service Enabled (the default setting) is switched on.

Task 4: Map the LLM Service Provider and Oracle Digital Assistant Requests and Responses

The skill's requests to the model's service provider need to be transformed from the interface used by Oracle Digital Assistant, known as the Common LLM Interface (CLMI), into the format that's accepted by the service provider. Likewise, the results returned from the service provider need to be transformed into CLMI. To enable this mapping, the following REST service event handler methods must have provider-specific code:

  • transformRequestPayload
  • transformResponsePayload
  • transformErrorResponsePayload

Note:

Each provider has its own specific format, but for this tutorial, we're using Azure OpenAI-specific handler code.

In this step, we're going to add these transformation methods by updating the placeholder code in the event handler code editor.


To open the event handler code editor and update the transformation code (in this case, for Azure OpenAI):

  1. Expand the service. Then select the event handler.
  2. Click Edit (located at the upper right) to open the editor.
  3. Replace the transformRequestPayload handler event method code (around Lines 24-26) with the following:
    transformRequestPayload: async (event, context) => {
      let payload = { "model": "gpt-4-0314",
                      "messages": event.payload.messages.map(m => { return {"role": m.role, "content": m.content}; }),
                      "max_tokens": event.payload.maxTokens,
                      "temperature": event.payload.temperature,
                      "stream": event.payload.streamResponse
                    };
      return payload;
    },
  4. Replace the transformResponsePayload event method (around Lines 35-37) with the following:
    transformResponsePayload: async (event, context) => {
      let llmPayload = {};
      if (event.payload.responseItems) {
        // streaming case
        llmPayload.responseItems = [];
        event.payload.responseItems
            .filter(item => item.choices.length > 0)
            .forEach(item => {
              llmPayload.responseItems.push({"candidates": item.choices.map( c => {return {"content": c.delta.content || "" };})});
            });
      } else {
        // non-streaming case
        llmPayload.candidates = event.payload.choices.map( c => {return {"content": c.message.content || "" };});
      }
      return llmPayload;
    },
  5. Replace the transformErrorResponsePayload event method (around Lines 47-49) with the following:
    transformErrorResponsePayload: async (event, context) => {
      let errorCode = 'unknown';
      if (event.payload.error) {
        if ('context_length_exceeded' === event.payload.error.code) {
          errorCode = 'modelLengthExceeded';
        } else if ('content_filter' === event.payload.error.code) {
          errorCode = 'flagged';
        }
        return {"errorCode": errorCode, "errorMessage": event.payload.error.message};
      } else {
        return {"errorCode": errorCode, "errorMessage": JSON.stringify(event.payload)};
      }
    }
  6. Check the code syntax by clicking Validate. You can find the complete code here. Use it to replace the code in the editor if you're encountering syntax errors that you can't fix.

    Note:

    If you use this code, be sure to replace the name property in the metadata section with the component name that you added in Task 3: Connect the Skill to the Model.
  7. Click Save, then Close. Wait for the deployment to complete. When Ready displays, you can move on to the next step.
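If you want to sanity-check the mapping logic before deploying, the transformations can be exercised as plain functions against mock payloads. The sketch below (illustrative only; in the skill this logic runs inside the async handler methods shown above) mirrors the request mapping and the non-streaming response mapping:

```javascript
// CLMI request -> Azure OpenAI request (same field mapping as transformRequestPayload).
function toProviderRequest(clmi) {
  return {
    model: "gpt-4-0314",
    messages: clmi.messages.map(m => ({ role: m.role, content: m.content })),
    max_tokens: clmi.maxTokens,
    temperature: clmi.temperature,
    stream: clmi.streamResponse
  };
}

// Azure OpenAI non-streaming response -> CLMI (same mapping as the
// non-streaming branch of transformResponsePayload).
function toClmiResponse(provider) {
  return { candidates: provider.choices.map(c => ({ content: c.message.content || "" })) };
}

// Mock CLMI payload, shaped the way Digital Assistant hands it to the handler.
const clmiRequest = {
  messages: [{ role: "system", content: "Tell me a joke" }],
  maxTokens: 128,
  temperature: 0,
  streamResponse: false
};
const providerRequest = toProviderRequest(clmiRequest);
console.log(providerRequest.max_tokens); // 128

// Mock provider response, shaped like the static response from Task 1.
const providerResponse = {
  choices: [{ index: 0, message: { role: "assistant", content: "Why don't scientists trust atoms?" } }]
};
const clmiResponse = toClmiResponse(providerResponse);
console.log(clmiResponse.candidates[0].content);
```

Note how the camelCase CLMI names (maxTokens, streamResponse) become the snake_case names that Azure OpenAI expects (max_tokens, stream); that renaming is the core of what the transformation handler does.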

Task 5: Define the LLM Service for the Skill

To enable the skill to connect users to the model through the dialog flow, you need to create an LLM service that combines the instance-wide LLM service that calls the model with the transformation event handler (which in this case is for Azure OpenAI).

  1. Click Settings in the left navbar.
  2. Open the Configuration page.
  3. In the Large Language Models Services section (located near the bottom of the page), click +New LLM Service.
  4. Complete the following fields:
    • Name: Enter an easily identifiable name for the LLM service. You'll reference this name when you build the dialog flow in the next step.
    • LLM Service: Select the name of the instance-wide LLM service that you created in Task 1: Create the REST Service to the Model.
    • Transformation Handler: Select the name of the event handler component that you created as part of the REST service in Task 3: Connect the Skill to the Model.
    • Leave the remaining properties at their default settings. Note that Default is switched on (true) if this is the only service that you've created so far for this tutorial.

      Important:

      Be sure that Mock is switched off.
  5. Click Save (located at the right).

Task 6: Integrate the Service

Now that the skill is connected to the LLM, you're going to connect your skill's users to the model by creating a dialog flow component that can call the model and tell it what to do. The component conveys these instructions using a prompt, which is a block of human-readable text. In this step, you'll provide this prompt, which instructs the model to evaluate user feedback as positive, negative, or neutral.

  1. Click Flows in the left navbar.
  2. Select unresolvedIntent.
  3. In the unresolvedMessage state, click the menu icon and then select Add State from the menu.
  4. Select Service Integration.
  5. Select Invoke Large Language Model.
  6. Enter a description like Sentiment analysis. Then click Insert.

    Note:

    As a best practice, always add descriptions to the invokeLLM states.

    The dialog flow now includes the invokeLLM state and the showLLMError state. For this tutorial, we're going to focus on the invokeLLM state only.
  7. Open the invokeLLM state. In the Component page, select the LLM service that you created in the last step.

    Note:

    If you've created only a single LLM service so far for this tutorial, then you can select Default.
  8. Add a prompt that sends instructions to the LLM service by pasting the following into the Prompt field. We'll go over the mechanics of this prompt in the next step, but you may want to scan through it here to familiarize yourself with it.
    You're a sentiment analysis bot. Given a chat snippet between a customer and a bot, you must classify this input into one of the following labels:

    1. Positive, thanks for reaching out to us

    2. Negative, as a compensation, you will get $200

    3. Neutral, we will keep improving our service quality

    Follow these instructions strictly:

    - For a given customer chat snippet, do not generate any output other than one of the 3 labels defined above

    - If you are given any input other than a customer chat snippet, respond with "Sorry, that does not seem like a valid chat snippet!"

    - Pay attention to the satisfaction level of the customer in the conversation

    - If there is insufficient information in the chat input, respond with label 3: "Neutral, we will keep improving our service quality"

    Below are some examples

    Input:

    customer: Hi, how can I update my password?

    bot: Sure, click the small icon on the left top of your window, and go to account settings, there you will see the reset password action button.

    customer: Okay, thanks

    Label: Neutral, we will keep improving our service quality

    ---

    Input:

    bot: Thanks for reaching out to us, how can I help you?

    customer: I have trouble in finding how to reset password, I'm super upset

    bot: Sure, you can click the small icon on the left top of your window, and go to account settings, there you will see the reset password action button.

    customer: oh, that icon is too tiny and it's extremely hard to find, it's terrible design

    Label: Negative, as a compensation, you will get $200

    ---

    Input:

    bot: Thanks for reaching out to us, how can I help you?

    customer: Can you tell me how do I change my password?

    bot: Sure, you can click the small icon on the left top of your window, and go to account settings, there you will see the reset password action button.

    customer: oh, that was easy. The buttons on the interface are really helpful and clear.

Task 7: Test the Prompt

Before we test the prompt that you added in the previous step, let's take a quick look at why it demonstrates some of the best practices for writing LLM prompts.

This prompt reflects good prompt design because:

  • It assigns a use case-specific persona to the LLM:
    You're a sentiment analysis bot.
  • It provides concise instructions:
    Given a chat snippet between a customer and a bot, you must classify this input into one of the following labels:
    
    1. Positive, thanks for reaching out to us
    
    2. Negative, as a compensation, you will get $200
    
    3. Neutral, we will keep improving our service quality
  • It defines clear acceptance criteria:
    Follow these instructions strictly:
    
    - For a given customer chat snippet, do not generate any output other than one of the 3 labels defined above
    
    - If you are given any input other than a customer chat snippet, respond with "Sorry, that does not seem like a valid chat snippet!"
    
    - Pay attention to the satisfaction level of the customer in the conversation
    
    - If there is insufficient information in the chat input, respond with label 3: "Neutral, we will keep improving our service quality"
  • It uses few-shot learning, which means that it provides a small set of examples to the LLM on how it should respond to the tone of the user input. To do this, the prompt text includes sets of Input and Label pairs that associate typical user-skill interactions with the expected classification:
    Input:
    
    
    bot: Thanks for reaching out to us, how can I help you?
    
    customer: I have trouble in finding how to reset password, I'm super upset
    
    bot: Sure, you can click the small icon on the left top of your window, and go to account settings, there you will see the reset password action button.
    
    customer: oh, that icon is too tiny and it's extremely hard to find, it's terrible design
    
    
    Label: Negative, as a compensation, you will get $200

The prompt includes one last Input block, but it's not paired with a Label. Rather than telling the LLM how to classify this input, this final block tests the model against the provided instructions and examples. In this case, the following input should elicit a positive result (Positive, thanks for reaching out to us):
Input:


bot: Thanks for reaching out to us, how can I help you?

customer: Can you tell me how do I change my password?

bot: Sure, you can click the small icon on the left top of your window, and go to account settings, there you will see the reset password action button.

customer: oh, that was easy. The buttons on the interface are really helpful and clear.

Note:

You can also test prompts by passing in parameters holding mock data, but for the sake of expediency, we're going to use an Input block that's part of the prompt.
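The few-shot structure described above (instructions, Input/Label example pairs separated by ---, then one unlabeled Input block) lends itself to being assembled programmatically. The following sketch uses hypothetical helper names (buildFewShotPrompt is not part of the tutorial's skill) to show how such a prompt could be built from labeled examples:

```javascript
// Instructions section, abbreviated from the sentiment prompt above.
const instructions = `You're a sentiment analysis bot. Given a chat snippet between a customer and a bot, you must classify this input into one of the following labels:

1. Positive, thanks for reaching out to us

2. Negative, as a compensation, you will get $200

3. Neutral, we will keep improving our service quality

Below are some examples`;

// Hypothetical helper: joins Input/Label example pairs with "---" separators,
// then appends a final unlabeled Input block for the model to classify.
function buildFewShotPrompt(examples, testSnippet) {
  const shots = examples
    .map(e => `Input:\n\n${e.input}\n\nLabel: ${e.label}`)
    .join("\n\n---\n\n");
  return `${instructions}\n\n${shots}\n\n---\n\nInput:\n\n${testSnippet}`;
}

const examples = [
  {
    input: "customer: Hi, how can I update my password?\nbot: Go to account settings.\ncustomer: Okay, thanks",
    label: "Neutral, we will keep improving our service quality"
  },
  {
    input: "customer: That icon is too tiny, it's terrible design",
    label: "Negative, as a compensation, you will get $200"
  }
];

const prompt = buildFewShotPrompt(examples, "customer: oh, that was easy, really helpful!");
console.log(prompt.endsWith("customer: oh, that was easy, really helpful!")); // true
```

Keeping the examples as data like this makes it easy to swap snippets in and out as you iterate on the prompt in the Prompt Builder.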

To find out if the model returns the expected result based on the prompt criteria:

  1. In the Component page, click Build Prompt to open the Prompt Builder.

    Writing prompts is an iterative process. In fact, continually refining your prompt is a best practice. It may take several revisions before a prompt returns the results that you expect. To help you through this revision cycle, you can use the Prompt Builder to incrementally test and modify your prompt until it functions properly. For the remainder of this tutorial, you are going to use the Prompt Builder to update and test your prompt.
  2. Click Generate Output.
  3. Verify that the LLM service returns the expected positive result in the LLM Output field, per the instructions in the prompt text.
  4. Replace the text in the Prompt field with the following:
    You're a sentiment analysis bot. Given a chat snippet between a customer and a bot, you must classify this input into one of the following labels:

    1. Positive, thanks for reaching out to us

    2. Negative, as a compensation, you will get $200

    3. Neutral, we will keep improving our service quality

    Follow these instructions strictly:

    - For a given customer chat snippet, do not generate any output other than one of the 3 labels defined above

    - If you are given any input other than a customer chat snippet, respond with "Sorry, that does not seem like a valid chat snippet!"

    - Pay attention to the satisfaction level of the customer in the conversation

    - If there is insufficient information in the chat input, respond with label 3: "Neutral, we will keep improving our service quality"

    Below are some examples

    Input:

    customer: Hi, how can I update my password?

    bot: Sure, click the small icon on the left top of your window, and go to account settings, there you will see the reset password action button.

    customer: Okay, thanks

    Label: Neutral, we will keep improving our service quality

    ---

    Input:

    bot: Thanks for reaching out to us, how can I help you?

    customer: Can you tell me how do I change my password?

    bot: Sure, you can click the small icon on the left top of your window, and go to account settings, there you will see the reset password action button.

    customer: oh, that was easy. The buttons on the interface are really helpful and clear.

    Label: Positive, thanks for reaching out to us

    ---

    Input:

    bot: Thanks for reaching out to us, how can I help you?

    customer: I have trouble in finding how to reset password, I'm super upset

    bot: Sure, you can click the small icon on the left top of your window, and go to account settings, there you will see the reset password action button.

    customer: oh, that icon is too tiny and it's extremely hard to find, it's terrible design
  5. This time, the prompt expects the model to return a negative result (Negative, as a compensation, you will get $200) because of the following input:
    Input:

    bot: Thanks for reaching out to us, how can I help you?

    customer: I have trouble in finding how to reset password, I'm super upset

    bot: Sure, you can click the small icon on the left top of your window, and go to account settings, there you will see the reset password action button.

    customer: oh, that icon is too tiny and it's extremely hard to find, it's terrible design

    Click Generate Output once again and then verify the negative result in the LLM Output field.
  6. Test out the neutral result (Neutral, we will keep improving our service quality) by first pasting in the following, then by clicking Generate Output.
    You're a sentiment analysis bot. Given a chat snippet between a customer and a bot, you must classify this input into one of the following labels:

    1. Positive, thanks for reaching out to us

    2. Negative, as a compensation, you will get $200

    3. Neutral, we will keep improving our service quality

    Follow these instructions strictly:

    - For a given customer chat snippet, do not generate any output other than one of the 3 labels defined above

    - If you are given any input other than a customer chat snippet, respond with "Sorry, that does not seem like a valid chat snippet!"

    - Pay attention to the satisfaction level of the customer in the conversation

    - If there is insufficient information in the chat input, respond with label 3: "Neutral, we will keep improving our service quality"

    Below are some examples

    Input:

    bot: Thanks for reaching out to us, how can I help you?

    customer: I have trouble in finding how to reset password, I'm super upset

    bot: Sure, you can click the small icon on the left top of your window, and go to account settings, there you will see the reset password action button.

    customer: oh, that icon is too tiny and it's extremely hard to find, it's terrible design

    Label: Negative, as a compensation, you will get $200

    ---

    Input:

    bot: Thanks for reaching out to us, how can I help you?

    customer: Can you tell me how do I change my password?

    bot: Sure, you can click the small icon on the left top of your window, and go to account settings, there you will see the reset password action button.

    customer: oh, that was easy. The buttons on the interface are really helpful and clear.

    Label: Positive, thanks for reaching out to us

    ---

    Input:

    customer: Hi, how can I update my password?

    bot: Sure, click the small icon on the left top of your window, and go to account settings, there you will see the reset password action button.

    customer: Okay, thanks
  7. Check for the neutral response.
  8. The rules set down by the prompt dictate that the input must be a snippet of a conversation between a bot and a customer:
    If you are given any input other than a customer chat snippet, respond with "Sorry, that does not seem like a valid chat snippet!"
    In this last step, you're going to test this rule by replacing the testing Input block at the end of the prompt with the following:
    Input:

    How's the weather?
  9. Check for Sorry, that does not seem like a valid chat snippet! in the LLM Output field. Then click Close.

    Note:

    Clicking Save Settings overwrites your original prompt.

Congratulations! You've successfully created a skill that interacts with an LLM.

More Learning Resources

Explore other labs on docs.oracle.com/learn or access more free learning content on the Oracle Learning YouTube channel. Additionally, visit education.oracle.com/learning-explorer to become an Oracle Learning Explorer.

For product documentation, visit Oracle Help Center.