Tooling in the N/llm Module
The content in this help topic pertains to SuiteScript 2.1.
Tooling in the N/llm module lets you extend large language model (LLM) interactions by defining custom tools. These tools can run business-specific logic (such as SuiteQL queries), receive inputs from the LLM, and return structured outputs. The LLM can request tool calls as needed and integrate the tool results into its generated output, producing richer and more dynamic responses.
When you call llm.generateText(options), you can provide a set of available tools using the options.tools parameter. The LLM uses the tool descriptions you provide to decide whether calling a tool would enhance its response. When the LLM detects that a tool should be called (based on the prompt and tools you provide, as well as the conversation history, if available), the response from llm.generateText(options) includes a list of tool call requests in the Response.toolCalls property. Your script can process each request and call the corresponding tool, then provide the tool results in a follow-up call to llm.generateText(options). The LLM uses these results to further inform its response, and it may return more tool call requests in its next response. Your script can repeat this process until the LLM has generated a final, complete response with no additional tool call requests.
The llm.generateTextStreamed(options) method also supports tools. When using this method, the list of tool call requests from the LLM is included in the StreamedResponse.toolCalls property.
To use tooling, your script should do the following:
- Create tool definitions (and associated tool handlers) that define the available tools the LLM can request. See Creating Tool Definitions.
- Provide the set of available tools to the LLM along with your prompt. See Providing Tools to the LLM.
- Call any tools that the LLM requests, format the tool results appropriately, and send the results back to the LLM. See Providing Tool Results to the LLM.
Creating Tool Definitions
A tool definition is an llm.Tool object that describes what the tool does and what parameters it expects. Use the llm.createTool(options) and llm.createToolParameter(options) methods to define each tool and its parameters.
The following code sample shows how to define a tool that looks up a user ID based on the user's name:
const findUserIdTool = llm.createTool({
    name: "findUserId",
    description: "Looks up user ID based on user name",
    parameters: [
        llm.createToolParameter({
            name: "userName",
            description: "Name of the user",
            type: llm.ToolParameterType.STRING
        })
    ]
});
It's important to use a meaningful name and description for your tool. The LLM uses these values to determine whether to request a tool call to augment its response.
You can define as many tools as needed. You can collect your tool definitions into an object or array for convenience, as the following code sample shows:
const TOOL_DEFINITIONS = {
    'findUserId': findUserIdTool
    // Add more tools as needed
};
Each tool definition should have an associated tool handler in your script. A tool handler is the function that performs the tool logic, such as running a SuiteQL query, looking up data, or any other required action. The handler receives the parameters the LLM provides, runs your logic, and returns the result.
The following code sample shows an example tool handler for the preceding findUserId tool definition. Similar to tool definitions, you can collect your tool handlers into an object or array for convenience:
const TOOL_HANDLERS = {
    findUserId: (options) => {
        const res = query.runSuiteQL({
            query: "SELECT id FROM entity WHERE fullname = ?",
            params: [options.userName]
        }).asMappedResults();
        return res.length === 0 ? -1 : res[0]["id"];
    }
    // Add more handlers as needed
};
Using this pattern, a tool handler has the same name as its corresponding tool definition. The pattern for defining handlers is flexible, so you can adapt it as needed to meet your business requirements.
When creating tool handlers using this pattern, keep the following considerations in mind:
- Parameter names in the tool handler must match those defined in the tool definition. For example, in the preceding code samples, the userName parameter is used in both the tool definition and its handler.
- The tool handler should return results in a simple format (such as a primitive type) that can be sent back to the LLM.
- Your script should handle errors gracefully (as shown by returning -1 in the preceding code sample when results aren't found).
- If a tool handler accesses or modifies NetSuite data, be sure to implement proper validation to prevent unintended operations or unauthorized access.
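As a sketch of the last point, a handler can validate its inputs before touching NetSuite data. The validateUserName helper and its 100-character limit below are illustrative assumptions, not part of the N/llm API; the query itself is passed in as a function so the validation logic is shown in isolation:

```javascript
// Hypothetical validation helper for the findUserId handler (sketch).
// The 100-character limit is an arbitrary illustrative choice.
function validateUserName(userName) {
    if (typeof userName !== 'string') {
        return false;
    }
    const trimmed = userName.trim();
    return trimmed.length > 0 && trimmed.length <= 100;
}

// A handler can reject invalid input before running any query.
// lookupUserId stands in for the SuiteQL lookup shown earlier.
function findUserId(options, lookupUserId) {
    if (!validateUserName(options.userName)) {
        return -1; // Same "not found" flag as in the preceding sample
    }
    return lookupUserId(options.userName.trim());
}
```

This keeps malformed or empty parameters from ever reaching the SuiteQL query, while still returning the simple -1 flag the LLM can interpret.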
Providing Tools to the LLM
After you create your tool definitions using llm.createTool(options), you need to make them available to the LLM during response generation. To do so, pass your tool definitions to llm.generateText(options) as an array using the options.tools parameter. This parameter indicates which tool definitions the LLM can use as it generates its response. You can provide multiple tools per call, and you can provide a conversation history for more context-aware interactions.
The following code sample shows how to provide a set of tools (as llm.Tool objects collected into an object called TOOL_DEFINITIONS) to llm.generateText(options):
const tools = Object.values(TOOL_DEFINITIONS);
const llmResult = llm.generateText({
prompt: "How many times did John Smith purchase Widget 123?",
tools: tools,
chatHistory: chatHistory
});
When the LLM receives the prompt and set of tools, it uses the names and descriptions of the tools to determine if using any of them would improve its response. If so, the LLM generates tool call requests (as llm.ToolCall objects) that are included in the Response.toolCalls property of the response. Your script processes these tool call requests, runs the corresponding handler functions, and provides the results back to the LLM using the options.toolResults parameter in a follow-up call to llm.generateText(options). The LLM uses these results to generate a more informed and relevant answer.
When providing tools to the LLM, keep the following considerations in mind:
- Make sure that the tool definitions you pass to llm.generateText(options) match the tool handler implementations in your script.
- Only provide tool definitions that are safe and appropriate for the current user context, and that are relevant to your current use case. Providing an unnecessarily large set of tools can affect LLM performance and the clarity of the response.
- Update the set of tools for each call or session as appropriate. This approach lets you provide a more dynamic and context-sensitive set of tools based on your use case.
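One way to apply the last two points is to select the subset of tool definitions appropriate for the current user before each call. The role names and the ALLOWED_TOOL_NAMES mapping below are hypothetical; only the shape of TOOL_DEFINITIONS comes from the earlier samples:

```javascript
// Hypothetical mapping of user roles to permitted tool names (sketch).
const ALLOWED_TOOL_NAMES = {
    sales: ['findUserId', 'findPurchaseCount'],
    support: ['findUserId']
};

// Select only the tool definitions the current role may use.
function selectToolsForRole(toolDefinitions, role) {
    const allowed = ALLOWED_TOOL_NAMES[role] || [];
    return allowed
        .filter((name) => name in toolDefinitions)
        .map((name) => toolDefinitions[name]);
}
```

The returned array can then be passed as the options.tools parameter to llm.generateText(options) in place of Object.values(TOOL_DEFINITIONS).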
Providing Tool Results to the LLM
After you provide tools to the LLM using llm.generateText(options), inspect the results for any tool call requests. Here's an example:
const llmResult = llm.generateText({ prompt, tools, chatHistory });
if (llmResult.toolCalls.length > 0) {
    // Tool calls need to be processed
}
A tool call request is represented by an llm.ToolCall object contained in the Response.toolCalls property. Each object includes the requested tool's name and the parameters for that tool, which correspond to your tool definition. Your script should run the logic associated with each tool call using the corresponding tool handler.
After your tool handler generates the results, use llm.createToolResult(options) to package the results in a format that you can send back to the LLM. The following code sample shows an example of how to do this (if your tool handlers are collected into an object called TOOL_HANDLERS):
const toolResults = [];
for (const call of llmResult.toolCalls) {
    // Run your handler to get the result based on the call parameters
    const resultValue = TOOL_HANDLERS[call.name](call.parameters);

    // Package the output using llm.createToolResult()
    const toolResult = llm.createToolResult({
        call,
        outputs: [{ result: resultValue }]
    });
    toolResults.push(toolResult);
}
In this example, the call parameter refers to the originating llm.ToolCall object, and the outputs array contains the results returned by the tool handler.
To provide the results to the LLM so it can complete its response, make a follow-up call to llm.generateText(options) with the tools and tool results (and conversation history, if available):
const llmFollowUpResult = llm.generateText({
    tools,
    toolResults: toolResults,
    chatHistory: llmResult.chatHistory
});
When you provide tool results using the toolResults parameter, you don't need to provide a prompt. The LLM considers only the set of available tools and the provided tool results (and conversation history, if available) when it generates its next response. The LLM uses the provided tool results to generate a new, richer response. If the new response includes additional tool call requests, repeat the process as needed until no further tool calls are requested.
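The full request/handle/respond cycle can be sketched as a single loop. The runToolLoop helper below is hypothetical; the llm module is passed in as a parameter only so the control flow can be followed (and exercised) in isolation — in a SuiteScript entry point you would use the N/llm module loaded through define or require:

```javascript
// Hypothetical loop that keeps calling tools until the LLM is done (sketch).
function runToolLoop(llm, prompt, tools, toolHandlers) {
    let response = llm.generateText({ prompt: prompt, tools: tools });
    while (response.toolCalls && response.toolCalls.length > 0) {
        // Run each requested handler and package its result
        const toolResults = response.toolCalls.map((call) => llm.createToolResult({
            call: call,
            outputs: [{ result: toolHandlers[call.name](call.parameters) }]
        }));
        // Follow-up call: no prompt, just tools, results, and history
        response = llm.generateText({
            tools: tools,
            toolResults: toolResults,
            chatHistory: response.chatHistory
        });
    }
    return response; // Final response with no further tool call requests
}
```

Each iteration forwards the chat history returned by the previous call, so the LLM keeps the full context of earlier tool calls and results.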
When providing tool results to the LLM, keep the following considerations in mind:
- Make sure your tool results are well formed and include all required output data for the LLM to use effectively.
- Use a loop to continue providing tool results to the LLM until the response contains no further tool call requests.
- If using a conversation history, always provide an updated chatHistory parameter between calls to maintain continuity and context.
- If a tool encounters an error or returns no data, be sure to handle it gracefully (for example, by returning a specific value or flag).
- Make sure your tool results don't expose data to unauthorized users or break compliance policies.