Provide an Extended Instruction to the OpenAI Model

This use case demonstrates how to provide an array-based instruction to the OpenAI model. Two roles (developer and user) are specified in the input array, and you define different content for each role for the OpenAI model to address.
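
Outside of Oracle Integration, this role-based input maps directly onto the OpenAI Responses API. The following is a minimal sketch (not part of the use case steps), assuming the official openai Python SDK and an OPENAI_API_KEY environment variable; it sends the two-role request used later in this use case.

    # A minimal sketch, assuming the official openai Python SDK and an
    # OPENAI_API_KEY environment variable.
    from openai import OpenAI

    client = OpenAI()

    response = client.responses.create(
        model="gpt-4o",
        input=[
            # The developer role carries the constraint; it takes precedence
            # over the user role in the OpenAI role hierarchy.
            {"role": "developer", "content": "Give information only about Boston"},
            {"role": "user", "content": "What is the zipcode of Beacon Hill, Boston?"},
        ],
    )

    # The generated text sits at output[0].content[0].text, matching the
    # sample response structure used throughout this use case.
    print(response.output[0].content[0].text)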

  1. Configure a REST Adapter trigger connection.
  2. Configure an OpenAI Adapter invoke connection.
  3. Create an application integration.
  4. Drag the REST Adapter trigger connection into the integration canvas for configuration. For this example, the REST Adapter is configured as follows:
    • A REST Service URL of /extended3 is specified.
    • A Method of POST is selected.
    • A Request Media Type of JSON is selected and the following sample JSON structure is specified:
      { "model" : "gpt-4o", "input" : [ { "role" : "developer", "content" : 
      "perform the openAI LLM function calling functionality described in the form of 
      text content. Response should be same as that returned for function calling" }, 
      { "role" : "user", "content" : "Define a get_weather function with parameters 
      location, which is a string object. Call the appropriate function for What is 
      the weather like in Paris today?" } ], "instructions" : "", "max_output_tokens" : 234, 
      "metadata" : null, "parallel_tool_calls" : true, "previous_response_id" : null,
       "store" : true, "stream" : false, "temperature" : 1, "tool_choice" : "auto", 
      "top_p" : 1, "truncation" : "disabled", "user" : "asdf" }
    • A Response Media Type of JSON is selected and the following sample JSON structure is specified:
      { "id" : "resp_67e6322ad4688192abad3f268b66236e05ecd4d8d90549f9", "object" : "response", 
      "created_at" : 1743139370, "status" : "completed", "error" : "df", "incomplete_details" : "asdf", 
      "instructions" : "asdf", "max_output_tokens" : 243, "model" : "gpt-4o-2024-08-06", "output" : [ { "type" : 
      "message", "id" : "msg_67e6322b18c08192a6352151edd8c9fa05ecd4d8d90549f9", "status" : "completed", "role" : 
      "assistant", "content" : [ { "type" : "output_text", "text" : "Get current temperature for a given location." } ] } ], 
      "parallel_tool_calls" : true, "previous_response_id" : "afsd", "reasoning" : { "effort" : "sdaf", "generate_summary" : 
      "asf" }, "store" : true, "temperature" : 1, "text" : { "format" : { "type" : "text" } }, "tool_choice" : "auto", "top_p" : 1, 
      "truncation" : "disabled", "usage" : { "input_tokens" : 43, "input_tokens_details" : { "cached_tokens" : 0 }, "output_tokens" : 
      68, "output_tokens_details" : { "reasoning_tokens" : 0 }, "total_tokens" : 111 }, "user" : "asdf" }
  5. Drag the OpenAI Adapter invoke connection into the integration canvas and configure it as follows.
    1. From the OpenAI LLM Models list, select the model to use (for this example, gpt-4o is selected).
    2. From the Request Type list, select Extended Prompt.
  6. In the request mapper, map the source Input element to the target Input element.

    The mapper shows the Sources section, Mapping canvas section, and Target section. The source Input element is mapped to the target Input element.

  7. In the response mapper, expand the source Response Wrapper element and target Response Wrapper element.

    The mapper shows the Sources section, Mapping canvas section, and Target section. The source Response Wrapper element and target Response Wrapper element are expanded.

  8. Perform the following mappings.

    The mapper shows the Sources section, Mapping canvas section, and Target section. The source elements are mapped to the target elements.

  9. Specify the business identifier and activate the integration.

    The completed integration looks as follows:


    The integration shows a trigger, map action, map action, invoke, and map action.

  10. From the Actions menu, select Run.
    The Configure and run page appears.
  11. In the Body field of the Request section, enter the following text, then click Run.

    The body includes two roles (developer and user), each with its own content. The developer role takes precedence over the user role in the OpenAI role hierarchy. For example, if you changed the user role content from asking for a Boston zip code to asking for the zip code of a neighborhood in New York, the OpenAI model would not answer the question. (Minimal sketches for sending this request directly and for parsing the response appear after these steps.)

    {
      "input" : [ {
        "role" : "developer",
        "content" : "Give information only about Boston"
      }, {
        "role" : "user",
        "content" : "What is the zipcode of Beacon Hill, Boston?"
      } ]
    }
    
    The Body field of the Response section returns the following output. The zip code of Beacon Hill is returned.
    {
      "output" : [ {
        "type" : "message",
        "id" : "msg_68477879290c819890e84a6f557f0b560cec1aa24c1b96c8",
        "status" : "completed",
        "role" : "assistant",
        "content" : [ {
          "type" : "output_text",
          "text" : "Beacon Hill, Boston, is primarily covered by the ZIP code 02108."
        } ]
      } ]
    }
    
  12. Expand the activity stream to view the flow of the messages sent and received.
    • Message received by the trigger connection:


      The message received by the trigger connection is shown. Values are set for the role and content parameters of both input entries.

    • Message sent by the invoke connection to the OpenAI model:


      The invoke action is expanded to show the wire message payload sent. Values are set for the role, content, role, content, and model parameters.

    • Message received by the invoke connection from the OpenAI model:


      The invoke action is expanded to show the wire message payload received. Values are set for the type, annotation, and text parameters.
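
Step 11 uses the Configure and run page, but the activated integration can also be invoked directly over HTTP. The following is a minimal sketch, assuming the Python requests library and hypothetical values for the Oracle Integration host, the integration identifier and version assigned at activation, and an OAuth access token; the /extended3 resource path comes from the REST Adapter configuration in step 4.

    import requests

    # Hypothetical values: replace with your Oracle Integration host, the
    # identifier/version of your activated integration, and a valid OAuth
    # access token.
    HOST = "https://example-instance.integration.ocp.oraclecloud.com"
    ENDPOINT = f"{HOST}/ic/api/integration/v1/flows/rest/EXTENDED/1.0/extended3"
    TOKEN = "<access-token>"

    # The same two-role body entered in step 11.
    body = {
        "input": [
            {"role": "developer", "content": "Give information only about Boston"},
            {"role": "user", "content": "What is the zipcode of Beacon Hill, Boston?"},
        ]
    }

    resp = requests.post(
        ENDPOINT,
        json=body,
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=60,
    )
    resp.raise_for_status()
    print(resp.json())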
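To pull the assistant text out of that response body, a small helper that walks the sample response structure from step 11 (output[].content[].text) is enough. A sketch:

    # Walks output[] -> content[] and collects the "output_text" parts, per
    # the sample response structure shown in step 11.
    def output_text(response_body: dict) -> str:
        parts = []
        for message in response_body.get("output", []):
            for part in message.get("content", []):
                if part.get("type") == "output_text":
                    parts.append(part["text"])
        return "\n".join(parts)

    # Example with the response returned in step 11:
    sample = {
        "output": [ {
            "type": "message",
            "status": "completed",
            "role": "assistant",
            "content": [ {
                "type": "output_text",
                "text": "Beacon Hill, Boston, is primarily covered by the ZIP code 02108."
            } ]
        } ]
    }
    print(output_text(sample))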