llm.generateText(options)
The content in this help topic pertains to SuiteScript 2.1.
Method Description

Returns the response from the LLM for a given prompt. When you're using unlimited usage mode, this method accepts the OCI configuration parameters. You can also specify OCI configuration parameters on the SuiteScript tab of the AI Preferences page. For more information, see Using Your Own OCI Configuration for SuiteScript Generative AI APIs.

Returns

llm.Response

Aliases

Note: These aliases use the same parameters and can throw the same errors as the llm.generateText(options) method.

Supported Script Types

Server scripts. For more information, see SuiteScript 2.x Script Types.

Governance

100

Module

N/llm Module

Since

2024.1
Parameters
Parameter | Type | Required / Optional | Description | Since |
---|---|---|---|---|
options.prompt | string | required | Prompt for the LLM. | 2024.1 |
options.chatHistory | llm.ChatMessage[] | optional | Chat history to be taken into consideration, as shown in the first sketch after this table. | 2024.1 |
options.documents | llm.Document[] | optional | A list of documents to provide additional context for the LLM to generate the response. Create documents using llm.createDocument(options); see the first sketch after this table. | 2025.1 |
options.modelFamily | string | optional | Specifies the LLM to use. Use values from the llm.ModelFamily enum to set the value of this parameter. If not specified, the Cohere Command R LLM is used. Note: JavaScript does not include an enumeration type. The SuiteScript 2.x documentation uses the term enumeration (or enum) to describe a plain JavaScript object with a flat, map-like structure. In this object, each key points to a read-only string value. | 2024.2 |
options.modelParameters | Object | optional | Parameters of the model. For more information about the model parameters, refer to the Chat Model Parameters topic in the Oracle Cloud Infrastructure Documentation. | 2024.1 |
options.modelParameters.frequencyPenalty | number | optional | A penalty that's assigned to a token when that token appears frequently. The higher the value, the stronger the penalty applied to previously present tokens, proportional to how many times they have already appeared in the prompt or prior generation. See Model Parameter Values by LLM for valid values. | 2024.1 |
options.modelParameters.maxTokens | number | optional | The maximum number of tokens the LLM is allowed to generate. The average number of tokens per word is 3. See Model Parameter Values by LLM for valid values. | 2024.1 |
options.modelParameters.presencePenalty | number | optional | A penalty that's assigned to each token when it appears in the output, to encourage generating outputs with tokens that haven't been used. Similar to frequencyPenalty, except that this penalty is applied equally to all tokens that have already appeared, regardless of their exact frequencies. See Model Parameter Values by LLM for valid values. | 2024.1 |
options.modelParameters.temperature | number | optional | Defines a range of randomness for the response. A lower temperature leans toward the highest-probability tokens and expected answers, while a higher temperature deviates toward random and unconventional responses. A lower value works best for responses that must be more factual or accurate, and a higher value works best for more creative responses. See Model Parameter Values by LLM for valid values. | 2024.1 |
options.modelParameters.topK | number | optional | Determines how many tokens are considered for generation at each step. See Model Parameter Values by LLM for valid values. | 2024.1 |
options.modelParameters.topP | number | optional | Sets the probability, which ensures that only the most likely tokens with a total probability mass of topP are considered for generation at each step. See Model Parameter Values by LLM for valid values. | 2024.1 |
options.ociConfig | Object | optional | Configuration needed for unlimited usage through the OCI Generative AI Service. Required only when accessing the LLM through an Oracle Cloud account and the OCI Generative AI Service. SuiteApps installed to target accounts are prevented from using the free usage pool for N/llm and must use the OCI configuration. | 2024.1 |
options.ociConfig.compartmentId | string | optional | Compartment OCID. For more information, refer to Managing Compartments in the Oracle Cloud Infrastructure Documentation. | 2024.1 |
options.ociConfig.endpointId | string | optional | Endpoint ID. This value is needed only when a custom OCI dedicated AI cluster (DAC) is to be used. For more information, refer to Managing an Endpoint in Generative AI in the Oracle Cloud Infrastructure Documentation. | 2024.1 |
options.ociConfig.fingerprint | string | optional | Fingerprint of the public key (only a NetSuite secret is accepted; see Creating Secrets). For more information, refer to Required Keys and OCIDs in the Oracle Cloud Infrastructure Documentation. | 2024.1 |
options.ociConfig.privateKey | string | optional | Private key of the OCI user (only a NetSuite secret is accepted; see Creating Secrets). For more information, refer to Required Keys and OCIDs in the Oracle Cloud Infrastructure Documentation. | 2024.1 |
options.ociConfig.tenancyId | string | optional | Tenancy OCID. For more information, refer to Managing the Tenancy in the Oracle Cloud Infrastructure Documentation. | 2024.1 |
options.ociConfig.userId | string | optional | User OCID. For more information, refer to Managing Users in the Oracle Cloud Infrastructure Documentation. | 2024.1 |
options.preamble | string | optional | Preamble override for the LLM. A preamble is the initial context or guiding message for an LLM. For more details about using a preamble, refer to About the Chat Models in Generative AI (Chat Model Parameters section) in the Oracle Cloud Infrastructure Documentation. Note: Preambles are supported only when using the Cohere Command R (llm.ModelFamily.COHERE_COMMAND_R) model family. | 2024.1 |
 | Object | optional | A JSON schema specifying the format of the response. Use this parameter to direct the LLM to return its response in a structured JSON format, which is useful for applications that require specific data extraction or integration with other systems. You can provide an object that represents a valid JSON schema, and the response will contain keys and values as defined in your schema, populated by the generated content. You can then parse the response as JSON content. For a code sample, see the second sketch after this table. This parameter is supported for Cohere models only. | 2025.1 |
 | string | optional | Specifies the safety mode to use. Use values from the llm.SafetyMode enum to set the value of this parameter. If not specified, the CONTEXTUAL safety mode is used. | 2025.1 |
options.timeout | number | optional | Timeout in milliseconds. The default value is 30,000. | 2024.1 |
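The following minimal sketch shows how chat history and documents might be passed together. It is not from the reference itself: the createDocument option names (id, data), the document contents, and the chat turns are illustrative assumptions to verify against the N/llm module documentation; the llm.ChatRole enum provides the USER and CHATBOT values.

require(['N/llm', 'N/log'], function (llm, log) {
    // Create a document for added context (documents are supported since 2025.1).
    // The id and data values below are illustrative only.
    const policyDoc = llm.createDocument({
        id: 'returns-policy',
        data: 'Items may be returned within 30 days of purchase.'
    });

    const response = llm.generateText({
        prompt: 'Can I return an item I bought last week?',
        documents: [policyDoc],
        // Prior turns for the LLM to take into consideration
        chatHistory: [
            { role: llm.ChatRole.USER, text: 'Hi, I have a question about an order.' },
            { role: llm.ChatRole.CHATBOT, text: 'Of course. What would you like to know?' }
        ]
    });

    log.debug({ title: 'LLM response', details: response.text });
});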
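And a sketch of the JSON schema parameter described in the table above. The option name responseFormat used here is an assumption, not a confirmed API name, so verify it against the current N/llm reference before relying on it:

require(['N/llm', 'N/log'], function (llm, log) {
    const response = llm.generateText({
        prompt: 'Suggest three names for a new espresso blend.',
        modelFamily: llm.ModelFamily.COHERE_COMMAND_R, // schema support is Cohere-only
        responseFormat: { // assumed option name for the JSON schema parameter
            type: 'object',
            properties: {
                names: {
                    type: 'array',
                    items: { type: 'string' }
                }
            },
            required: ['names']
        }
    });

    // The response text contains JSON shaped by the schema
    const parsed = JSON.parse(response.text);
    log.debug({ title: 'First suggestion', details: parsed.names[0] });
});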
Errors
Error Code | Thrown If |
---|---|
 | One or more unrecognized model parameters have been used. |
 | One or more unrecognized parameters for OCI configuration have been used. |
 | Documents provided using the options.documents parameter … |
 | The response from the LLM included sensitive or inappropriate content that is restricted by the specified safety mode. This error can also be thrown when using the … |
 | The number of parallel requests to the LLM is greater than 5. |
 | The schema provided using the … |
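Because these conditions surface as thrown errors at runtime, callers typically wrap the call in a try/catch block. A minimal sketch, reading the error code generically from the caught object:

require(['N/llm', 'N/log'], function (llm, log) {
    try {
        const response = llm.generateText({
            prompt: 'Hello World!'
        });
        log.debug({ title: 'LLM response', details: response.text });
    } catch (e) {
        // e.name carries the error code; e.message carries the details
        log.error({ title: e.name, details: e.message });
    }
});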
Syntax
The following code sample shows the syntax for this member. It isn't a functional example. For a complete script example, see N/llm Module Script Samples.
// Add additional code
...
const response = llm.generateText({
// preamble is optional for Cohere models
preamble: "You are a successful salesperson. Answer in an enthusiastic, professional tone.",
prompt: "Hello World!",
documents: [doc1, doc2], // create documents using llm.createDocument(options)
modelFamily: llm.ModelFamily.COHERE_COMMAND_R, // COHERE_COMMAND_R is also the default when modelFamily is omitted
modelParameters: {
maxTokens: 1000,
temperature: 0.2,
topK: 3,
topP: 0.7,
frequencyPenalty: 0.4,
presencePenalty: 0
},
ociConfig: {
// Replace ociConfig values with your Oracle Cloud Account values
userId: 'ocid1.user.oc1..aaaaaaaanld….exampleuserid',
tenancyId: 'ocid1.tenancy.oc1..aaaaaaaabt….exampletenancyid',
compartmentId: 'ocid1.compartment.oc1..aaaaaaaaph….examplecompartmentid',
// Replace fingerprint and privateKey with your NetSuite API secret ID values
fingerprint: 'custsecret_oci_fingerprint',
privateKey: 'custsecret_oci_private_key'
}
});
...
// Add additional code
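
The call above returns an llm.Response object; its text property holds the generated output, which can be logged or parsed directly:

// Read the generated text from the llm.Response returned above
log.debug({ title: 'LLM output', details: response.text });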