Provide an Extended Instruction to the OpenAI Model to Use a Function

This use case demonstrates how to provide an array-based instruction to the OpenAI model. Two roles (developer and user) are specified in the input array, and you define the content for each role. The content for the user role instructs the OpenAI model to invoke a weather function.

This use case implements the same integration described in Provide an Extended Instruction to the OpenAI Model. The only difference is the request payload content that you specify on the Configure and run page at runtime.
  1. Configure a REST Adapter trigger connection.
  2. Configure an OpenAI Adapter invoke connection.
  3. Create an application integration.
  4. Drag the REST Adapter trigger connection into the integration canvas and configure it. For this example, the connection is configured as follows:
    • A REST Service URL of /extended3 is specified.
    • A Method of POST is selected.
    • A Request Media Type of JSON is selected and the following sample JSON structure is specified (a Python sketch of the equivalent direct API call appears at the end of this step):
      { "model" : "gpt-4o", "input" : [ { "role" : "developer", "content" : 
      "perform the openAI LLM function calling functionality described in the form of 
      text content. Response should be same as that returned for function calling" }, 
      { "role" : "user", "content" : "Define a get_weather function with parameters 
      location, which is a string object. Call the appropriate function for What is 
      the weather like in Paris today?" } ], "instructions" : "", "max_output_tokens" : 234, 
      "metadata" : null, "parallel_tool_calls" : true, "previous_response_id" : null,
       "store" : true, "stream" : false, "temperature" : 1, "tool_choice" : "auto", 
      "top_p" : 1, "truncation" : "disabled", "user" : "asdf" }
    • A Response Media Type of JSON is selected and the following sample JSON structure is specified:
      { "id" : "resp_67e6322ad4688192abad3f268b66236e05ecd4d8d90549f9", "object" : "response", 
      "created_at" : 1743139370, "status" : "completed", "error" : "df", "incomplete_details" : "asdf", 
      "instructions" : "asdf", "max_output_tokens" : 243, "model" : "gpt-4o-2024-08-06", "output" : [ { "type" : 
      "message", "id" : "msg_67e6322b18c08192a6352151edd8c9fa05ecd4d8d90549f9", "status" : "completed", "role" : 
      "assistant", "content" : [ { "type" : "output_text", "text" : "Get current temperature for a given location." } ] } ], 
      "parallel_tool_calls" : true, "previous_response_id" : "afsd", "reasoning" : { "effort" : "sdaf", "generate_summary" : 
      "asf" }, "store" : true, "temperature" : 1, "text" : { "format" : { "type" : "text" } }, "tool_choice" : "auto", "top_p" : 1, 
      "truncation" : "disabled", "usage" : { "input_tokens" : 43, "input_tokens_details" : { "cached_tokens" : 0 }, "output_tokens" : 
      68, "output_tokens_details" : { "reasoning_tokens" : 0 }, "total_tokens" : 111 }, "user" : "asdf" }
  5. Drag the OpenAI Adapter invoke connection into the integration canvas and configure it as follows.
    1. From the OpenAI LLM Models list, select the model to use (for this example, gpt-4o is selected).
    2. From the Request Type list, select Extended Prompt.
  6. In the request mapper, map the source Input element to the target Input element.

    The mapper shows the Sources section, Mapping canvas section, and Target section. The source Input element is mapped to the target Input element.

  7. In the response mapper, expand the source Response Wrapper element and target Response Wrapper element.

    The mapper shows the Sources section, Mapping canvas section, and Target section. The source Response Wrapper element and target Response Wrapper element are expanded.

  8. Perform the following mappings.

    The mapper shows the Sources section, Mapping canvas section, and Target section. The source elements are mapped to the target elements.

  9. Specify the business identifier and activate the integration.

    The completed integration looks as follows:


    The integration shows a trigger, map action, map action, invoke, and map action.

  10. From the Actions menu, select Run.
    The Configure and run page appears.
  11. In the Body field of the Request section, enter the following text, then click Run.

    The body includes two roles (developer and user), each with its own content. The user content asks the OpenAI model to define and call a get_weather function (a sketch of how the same function would be declared as a real tool appears at the end of this step).

    {
      "instructions": "",
      "metadata": null,
      "store": true,
      "top_p": 1,
      "input": [{
        "role": "developer",
        "content": "perform the openAI LLM function calling functionality described 
    in the form of text content. Response should be same as that returned for function 
    calling"
      }, {
        "role": "user",
        "content": "Define a get_weather function with parameters location, which is a 
    string object. Call the appropriate function for What is the weather like in Paris today?"
      }],
      "previous_response_id": null,
      "parallel_tool_calls": true,
      "stream": false,
      "temperature": 1,
      "tool_choice": "auto",
      "model": "gpt-4o",
      "truncation": "disabled",
      "user": "asdf",
      "max_output_tokens": 234
    }
    The Body field of the Response section returns the following output:
    {
      "output" : [ {
        "type" : "message",
        "id" : "msg_684601cbd75c819b85e9067ac59631ec07e1a9c0b54af34a",
        "status" : "completed",
        "role" : "assistant",
        "content" : [ {
          "type" : "output_text",
          "text" : "Here's how you can define the `get_weather` function and perform 
    the function call for the weather in Paris:\n\n```python\ndef 
    get_weather(location: str):\n    # Mock implementation of weather checking.\n    
    return {\"location\": location, \"forecast\": \"sunny\", \"temperature\": \"15°C\"}\n\n# 
    Call the function\nget_weather(\"Paris\")\n```\n\nFunction call output:\n```json\n{\n  
    \"location\": \"Paris\",\n  \"forecast\": \"sunny\",\n  \"temperature\": \"15°C\"\n}\n```"
        } ]
      } ]
    }
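
    For comparison, a real function-calling request would declare get_weather as a tool in the payload instead of describing it in the prompt text. The following Python sketch shows one plausible declaration in the Responses API function-tool format; the description and parameter schema are illustrative assumptions, not part of this use case's payload.

    # Sketch only: the prompt's get_weather function declared as a real tool.
    # The description and JSON schema are assumptions for illustration.
    tools = [
        {
            "type": "function",
            "name": "get_weather",
            "description": "Get current temperature for a given location.",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "City name, for example Paris",
                    }
                },
                "required": ["location"],
            },
        }
    ]

    # Adding "tools": tools to the request payload (with "tool_choice" set to
    # "auto") would make the model return a structured function call instead
    # of the emulated text answer shown above.
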
  12. Expand the activity stream to view the flow of the messages sent and received.
    • Message sent to the trigger connection:


      The message sent to the trigger connection is shown. Values are set for the role, content, role, and content parameters.

    • Message sent by the invoke connection to the OpenAI model:


      The invoke action is expanded to show the wire message payload sent. Values are set for the input, role, content, role, and content parameters.

    • Message received by the invoke connection from the OpenAI model:


      The invoke action is expanded to show the wire message payload received. Values are set for the type, annotation, text, and temperature parameters.

    • Message reply to the trigger connection:


      The reply payload wire message to the trigger connection is shown. Values are set for the output, type, id, status, role, content, type, and text parameters.
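
    If a downstream client needs only the assistant's text, it can extract it from the reply payload structure shown in step 11. A minimal Python sketch, assuming the reply has already been received as a JSON string (truncated here for brevity):

    import json

    # reply_json stands in for the raw reply payload returned to the trigger
    # connection.
    reply_json = """{ "output" : [ { "type" : "message",
      "status" : "completed", "role" : "assistant",
      "content" : [ { "type" : "output_text",
        "text" : "...model answer..." } ] } ] }"""

    reply = json.loads(reply_json)
    for item in reply["output"]:
        if item.get("type") == "message" and item.get("role") == "assistant":
            for part in item["content"]:
                if part.get("type") == "output_text":
                    print(part["text"])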