N/llm Module
The content in this help topic pertains to SuiteScript 2.1.
The N/llm module supports generative artificial intelligence (AI) capabilities in SuiteScript. You can use this module to send requests to the large language models (LLMs) supported by NetSuite and receive responses to use in your scripts.
If you're new to using generative AI in SuiteScript, see SuiteScript 2.x Generative AI APIs. That topic contains essential information about this feature.
The following list summarizes the main features available in the N/llm module:

- Content generation – You can request generative AI content from a supported LLM using llm.generateText(options). You provide a prompt that describes the content you want to generate, and the module sends the request to the Oracle Cloud Infrastructure (OCI) Generative AI service to generate a response.

- Prompt evaluation – If you use Prompt Studio to manage existing prompts in your NetSuite account, you can use llm.evaluatePrompt(options) to send a prompt from Prompt Studio to the LLM for evaluation. This method uses the information from the prompt definition in Prompt Studio (such as the model and model parameters), and it lets you provide values for any variables the prompt uses before sending it for evaluation. For more information, see Prompt Studio.

- Prompt and Text Enhance action management – By using the N/llm module, you can create, update, and delete prompts and Text Enhance actions in your scripts. For more information, see Managing Prompts and Text Enhance Actions Using the N/llm Module.

- Retrieval-augmented generation (RAG) support – You can provide source documents to the LLM when calling llm.generateText(options). The LLM uses information from the source documents to augment its response and returns citations that identify which source documents it used. For an example of how to implement a RAG use case using the N/llm module, see Provide Source Documents When Calling the LLM.

- Embedding support – The llm.embed(options) method converts text to vector embeddings. Your SuiteScript applications can use vector embeddings for use cases such as semantic search, recommender systems, text classification, or text clustering. For an example of how to generate and use embeddings, see Find Similar Items Using Embeddings. For more information about embedding models, refer to About the Embedding Models in Generative AI in the Oracle Cloud Infrastructure Documentation.

  Embedding methods have their own monthly free usage quota, separate from the generate methods. If you use both in a month, you'll see two rows for that month, labeled by usage type. For more information, see View SuiteScript AI Usage Limit and Usage.

- Streaming support – Using the llm.generateTextStreamed(options) and llm.evaluatePromptStreamed(options) methods, your code receives content as the LLM generates it, instead of waiting for the complete response. For an example of how to work with streamed content, see Receive a Partial Response from the LLM.

- Method aliases – You can use the following aliases in your code instead of the method names:

  - llm.chat(options) is an alias for llm.generateText(options).
  - llm.executePrompt(options) is an alias for llm.evaluatePrompt(options).
  - llm.chatStreamed(options) is an alias for llm.generateTextStreamed(options).
  - llm.executePromptStreamed(options) is an alias for llm.evaluatePromptStreamed(options).

  Promise versions of these methods are also available. When aliases are available for a method, they're listed in the main table of the method's help topic. For an example, see llm.generateTextStreamed(options).
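To make the content-generation flow above concrete, here is a hedged sketch of a helper that wraps llm.generateText(options). The llm module is passed in as a parameter so the logic can be exercised outside NetSuite; in a real server script you would load the module with define(['N/llm'], ...). The prompt wording and the modelParameters values shown are illustrative assumptions, not required settings.

```javascript
// Sketch: ask the LLM to summarize a block of text.
// `llm` is the N/llm module (injected so this helper is easy to test with a stub).
function summarizeText(llm, sourceText) {
    const response = llm.generateText({
        prompt: 'Summarize the following text in two sentences:\n' + sourceText,
        modelParameters: {
            maxTokens: 500,     // cap the length of the generated response
            temperature: 0.2    // keep the output relatively deterministic
        }
    });
    // Response.text holds the generated content.
    return response.text;
}
```

As the considerations later in this topic note, validate the returned text for accuracy and quality before using it.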
As you work with this module, keep the following considerations in mind:

- Generative AI features, such as the N/llm module, use creativity in their responses. Make sure you validate the AI-generated responses for accuracy and quality. Oracle NetSuite isn't responsible or liable for the use or interpretation of AI-generated content.

- SuiteScript Generative AI APIs (the N/llm module) are available only for accounts located in certain regions. For a list of these regions, see Generative AI Availability in NetSuite.
N/llm Module Members
| Member Type | Name | Return Type / Value Type | Supported Script Types | Description |
|---|---|---|---|---|
| Object | llm.ChatMessage | Object | Server scripts | The chat message. |
| | llm.Citation | Object | Server scripts | A citation returned from the LLM. |
| | llm.Document | Object | Server scripts | A document to be used as source content when calling the LLM. |
| | llm.EmbedResponse | Object | Server scripts | The embeddings response returned from the LLM. |
| | llm.Response | Object | Server scripts | The response returned from the LLM. |
| | llm.StreamedResponse | Object | Server scripts | The streamed response returned from the LLM. |
| Method | llm.createChatMessage(options) | Object | Server scripts | Creates a chat message based on a specified role and text. |
| | llm.createDocument(options) | Object | Server scripts | Creates a document to be used as source content when calling the LLM. |
| | llm.embed(options) | Object | Server scripts | Returns the embeddings from the LLM for a given input. |
| | llm.embed.promise(options) | void | Server scripts | Asynchronously returns the embeddings from the LLM for a given input. |
| | llm.evaluatePrompt(options) | Object | Server scripts | Takes the ID of an existing prompt and values for variables used in the prompt and returns the response from the LLM. When you're using unlimited usage mode, this method also accepts the OCI configuration parameters. |
| | llm.evaluatePrompt.promise(options) | void | Server scripts | Takes the ID of an existing prompt and values for variables used in the prompt and asynchronously returns the response from the LLM. When you're using unlimited usage mode, this method also accepts the OCI configuration parameters. |
| | llm.evaluatePromptStreamed(options) | Object | Server scripts | Takes the ID of an existing prompt and values for variables used in the prompt and returns the streamed response from the LLM. When you're using unlimited usage mode, this method also accepts the OCI configuration parameters. |
| | llm.evaluatePromptStreamed.promise(options) | void | Server scripts | Takes the ID of an existing prompt and values for variables used in the prompt and asynchronously returns the streamed response from the LLM. When you're using unlimited usage mode, this method also accepts the OCI configuration parameters. |
| | llm.generateText(options) | Object | Server scripts | Takes a prompt and parameters for the LLM and returns the response from the LLM. When you're using unlimited usage mode, this method also accepts the OCI configuration parameters. |
| | llm.generateText.promise(options) | void | Server scripts | Takes a prompt and parameters for the LLM and asynchronously returns the response from the LLM. When you're using unlimited usage mode, this method also accepts the OCI configuration parameters. |
| | llm.generateTextStreamed(options) | Object | Server scripts | Takes a prompt and parameters for the LLM and returns the streamed response from the LLM. When you're using unlimited usage mode, this method also accepts the OCI configuration parameters. |
| | llm.generateTextStreamed.promise(options) | void | Server scripts | Takes a prompt and parameters for the LLM and asynchronously returns the streamed response from the LLM. When you're using unlimited usage mode, this method also accepts the OCI configuration parameters. |
| | llm.getRemainingFreeUsage() | number | Server scripts | Returns the number of free requests in the current month. |
| | llm.getRemainingFreeUsage.promise() | void | Server scripts | Asynchronously returns the number of free requests in the current month. |
| | llm.getRemainingFreeEmbedUsage() | number | Server scripts | Returns the number of free embeddings requests in the current month. |
| | llm.getRemainingFreeEmbedUsage.promise() | void | Server scripts | Asynchronously returns the number of free embeddings requests in the current month. |
| Enum | llm.ChatRole | enum | Server scripts | Holds the string values for the author (role) of a chat message. Use this enum to set the value of the role parameter in llm.createChatMessage(options). |
| | llm.EmbedModelFamily | enum | Server scripts | The large language model to be used to generate embeddings. Use this enum to specify the embeddings model when calling llm.embed(options). |
| | llm.ModelFamily | enum | Server scripts | Holds the string values for the large language model to be used. Use this enum to set the value of the options.modelFamily parameter in llm.generateText(options). |
| | llm.Truncate | enum | Server scripts | The truncation method to use when the embeddings input exceeds 512 tokens. Use this enum to set the value of the options.truncate parameter in llm.embed(options). |
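To illustrate how several of the members above fit together, the following hedged sketch continues a multi-turn conversation: earlier turns are wrapped with llm.createChatMessage(options) and passed to llm.generateText(options) as chat history. The llm module is injected as a parameter so the helper can be tested with a stub; in NetSuite you would load it with define(['N/llm'], ...), and turn roles would typically come from the llm.ChatRole enum.

```javascript
// Sketch: continue a conversation by sending prior turns as chat history.
// priorTurns: array of { role: string, text: string } objects.
function askWithHistory(llm, priorTurns, question) {
    const chatHistory = priorTurns.map(function (turn) {
        return llm.createChatMessage({
            role: turn.role,   // typically a llm.ChatRole value
            text: turn.text
        });
    });
    const response = llm.generateText({
        prompt: question,
        chatHistory: chatHistory
    });
    return response.text;
}
```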
ChatMessage Object Members
| Member Type | Name | Return Type / Value Type | Supported Script Types | Description |
|---|---|---|---|---|
| Property | ChatMessage.role | string | Server scripts | The author (role) of the chat message. |
| | ChatMessage.text | string | Server scripts | Text of the chat message. This text can be either the prompt sent by the script or the response returned by the LLM. |
Citation Object Members
| Member Type | Name | Return Type / Value Type | Supported Script Types | Description |
|---|---|---|---|---|
| Property | Citation.documentIds | string[] | Server scripts | The IDs of the documents where the cited text is located. |
| | Citation.end | number | Server scripts | The ending position of the cited text. |
| | Citation.start | number | Server scripts | The starting position of the cited text. |
| | Citation.text | string | Server scripts | The cited text from the documents. |
Document Object Members
| Member Type | Name | Return Type / Value Type | Supported Script Types | Description |
|---|---|---|---|---|
| Property | Document.data | string | Server scripts | The content of the document. |
| | Document.id | string | Server scripts | The ID of the document. |
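The Document object underpins the RAG support described earlier in this topic. The following hedged sketch (module injected for testability; option shapes assumed from the descriptions in this topic) builds documents with llm.createDocument(options), passes them to llm.generateText(options), and collects the document IDs referenced by the returned citations.

```javascript
// Sketch: answer a question grounded in source documents and report which
// documents the LLM cited.
// docs: array of { id: string, data: string } objects.
function answerFromDocuments(llm, docs, question) {
    const documents = docs.map(function (d) {
        return llm.createDocument({ id: d.id, data: d.data });
    });
    const response = llm.generateText({
        prompt: question,
        documents: documents
    });
    // Each Citation lists the IDs of the documents the cited text came from.
    const citedIds = [];
    (response.citations || []).forEach(function (citation) {
        citation.documentIds.forEach(function (id) {
            if (citedIds.indexOf(id) === -1) {
                citedIds.push(id);
            }
        });
    });
    return { answer: response.text, citedDocumentIds: citedIds };
}
```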
EmbedResponse Object Members
| Member Type | Name | Return Type / Value Type | Supported Script Types | Description |
|---|---|---|---|---|
| Property | EmbedResponse.embeddings | number[] | Server scripts | The embeddings returned from the LLM. |
| | EmbedResponse.inputs | string[] | Server scripts | The list of inputs used to generate the embeddings response. |
| | EmbedResponse.model | string | Server scripts | The model used to generate the embeddings response. |
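The embeddings in EmbedResponse.embeddings are plain numeric vectors, so comparing texts for a semantic search is ordinary arithmetic on those vectors. A common similarity measure (an assumption here; the module itself doesn't provide one) is cosine similarity:

```javascript
// Cosine similarity between two embedding vectors of equal length.
// Returns a value in [-1, 1]; higher means more semantically similar.
function cosineSimilarity(a, b) {
    let dot = 0;
    let normA = 0;
    let normB = 0;
    for (let i = 0; i < a.length; i++) {
        dot += a[i] * b[i];
        normA += a[i] * a[i];
        normB += b[i] * b[i];
    }
    return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

A script implementing the semantic-search use case mentioned earlier could embed a query, compute this similarity against stored item embeddings, and return the highest-scoring items.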
Response Object Members
| Member Type | Name | Return Type / Value Type | Supported Script Types | Description |
|---|---|---|---|---|
| Property | Response.chatHistory | llm.ChatMessage[] | Server scripts | List of chat messages. |
| | Response.citations | llm.Citation[] | Server scripts | List of citations used to generate the response. |
| | Response.documents | llm.Document[] | Server scripts | List of documents used to generate the response. |
| | Response.model | string | Server scripts | Model used to produce the LLM response. |
| | Response.text | string | Server scripts | Text returned by the LLM. |
| | Response.usage | Object | Server scripts | Token usage for a request to the LLM. |
StreamedResponse Object Members
| Member Type | Name | Return Type / Value Type | Supported Script Types | Description |
|---|---|---|---|---|
| Property | StreamedResponse.chatHistory | llm.ChatMessage[] | Server scripts | List of chat messages. |
| | StreamedResponse.citations | llm.Citation[] | Server scripts | List of citations used to generate the streamed response. |
| | StreamedResponse.documents | llm.Document[] | Server scripts | List of documents used to generate the streamed response. |
| | StreamedResponse.model | string | Server scripts | Model used to produce the streamed response. |
| | StreamedResponse.text | string | Server scripts | Text returned by the LLM. |
Usage Object Members
| Member Type | Name | Return Type / Value Type | Supported Script Types | Description |
|---|---|---|---|---|
| Property | Usage.completionTokens | number | Server scripts | The number of tokens in the response from the LLM. |
| | Usage.promptTokens | number | Server scripts | The number of tokens in the request to the LLM. |
| | Usage.totalTokens | number | Server scripts | The total number of tokens for the entire request to the LLM. |
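Because totalTokens covers the entire request, it should typically equal promptTokens plus completionTokens. A small hypothetical helper like the following can accumulate usage across several calls, for example for logging or quota tracking:

```javascript
// Sum token usage across multiple Response.usage objects.
function accumulateUsage(usages) {
    return usages.reduce(function (acc, u) {
        return {
            promptTokens: acc.promptTokens + u.promptTokens,
            completionTokens: acc.completionTokens + u.completionTokens,
            totalTokens: acc.totalTokens + u.totalTokens
        };
    }, { promptTokens: 0, completionTokens: 0, totalTokens: 0 });
}
```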