N/llm Module Script Samples

Note:

The content in this help topic pertains to SuiteScript 2.1.

The following script samples demonstrate how to use the features of the N/llm module.

Send a Prompt to the LLM and Receive a Response

The following sample sends a "Hello World" prompt to the default NetSuite large language model (LLM) and receives the response. It also shows the remaining free usage for the month.

For instructions about how to run a SuiteScript 2.1 code snippet in the debugger, see On-Demand Debugging of SuiteScript 2.1 Scripts. Step through the code until the line before the end of the script to see the response text returned from the LLM and the remaining free usage for the month.

Note:

This sample script uses the require function so that you can copy it into the SuiteScript Debugger and test it. You must use the define function in an entry point script (the script you attach to a script record and deploy). For more information, see SuiteScript 2.x Script Basics and SuiteScript 2.x Script Types.

/**
 * @NApiVersion 2.1
 */
// This example shows how to query the default LLM
require(['N/llm'],
    function(llm) {        
        const response = llm.generateText({
            // modelFamily is optional. When omitted, the Cohere Command R model is used.
            prompt: "Hello World!",
            modelParameters: {
                maxTokens: 1000,        // Maximum number of tokens in the response
                temperature: 0.2,       // Lower values produce more focused, deterministic output
                topK: 3,                // Sample only from the 3 most likely tokens at each step
                topP: 0.7,              // Restrict sampling to the top 70% of probability mass
                frequencyPenalty: 0.4,  // Penalize tokens proportionally to how often they repeat
                presencePenalty: 0      // No flat penalty for tokens that have already appeared
            }
        });
        const responseText = response.text;
        const remainingUsage = llm.getRemainingFreeUsage(); // View remaining monthly free usage
    }); 


Clean Up Content for Text Area Fields After Saving a Record

The following sample uses the large language model (LLM) to correct the text for the purchase description and the sales description fields of an inventory item record after the user saves the record. This sample also shows how to use the llm.generateText.promise method.

To test this script after script deployment:

  1. Go to Lists > Accounting > Items > New.

  2. Select Inventory Item.

  3. Enter an Item Name and optionally fill out any other fields.

  4. Select a value for Tax Schedule in the Accounting subtab.

  5. Enter text into the Purchase Description and Sales Description fields.

  6. Click Save.

    When you save, the script will trigger. The content in the Purchase Description and Sales Description fields will be corrected, and the record will be submitted.

Note:

This script sample uses the define function, which is required for an entry point script (a script you attach to a script record and deploy). You must use the require function if you want to copy the script into the SuiteScript Debugger and test it. For more information, see SuiteScript 2.x Global Objects.

/**
 * @NApiVersion 2.1
 * @NScriptType UserEventScript
 */
define(['N/llm'], (llm) => {
    /**
     * @param {Object} scriptContext The updated inventory item
     * record to clean up typo errors for purchase description and
     * sales description fields. The values are set before the record
     * is submitted to be saved.
     */ 
    function fixTypos(scriptContext) {
        const purchaseDescription = scriptContext.newRecord.getValue({
            fieldId: 'purchasedescription'
        })
        const salesDescription = scriptContext.newRecord.getValue({
            fieldId: 'salesdescription'
        })

        const p1 = llm.generateText.promise({
            prompt: `Please clean up typos in the following text: 
                     ${purchaseDescription} and return just the corrected text. 
                     Return the text as is if there's no typo 
                     or you don't understand the text.`
        })
        const p2 = llm.generateText.promise({
            prompt: `Please clean up typos in the following text: 
                     ${salesDescription} and return just the corrected text. 
                     Return the text as is if there's no typo 
                     or you don't understand the text.`
        })

        // When both promises resolve, set the corrected values on the
        // record before it is submitted
        Promise.all([p1, p2]).then((results) => {
            scriptContext.newRecord.setValue({
                fieldId: 'purchasedescription',
                value: results[0].text  // Promise.all resolves with the llm.Response objects
            })
            scriptContext.newRecord.setValue({
                fieldId: 'salesdescription',
                value: results[1].text
            })
        })
    }

    return { beforeSubmit: fixTypos }
}) 


Provide an LLM-based ChatBot for NetSuite Users

The following sample creates a custom NetSuite form titled Chat Bot. The user can enter a prompt for the large language model (LLM) in the Prompt field. After the user clicks Submit, NetSuite sends the request to the LLM. The LLM returns a response, which is displayed as part of the message history on the form.

The script includes code that handles the prompts and responses as a conversation between the user and the LLM. The code associates the prompts the user enters as USER messages (labeled You on the form) and the responses from the LLM as CHATBOT messages (labeled ChatBot on the form). The code also assembles a chat history and sends it along with the prompt to the LLM. Without the chat history, the LLM would treat each prompt as an unrelated request. For example, if your first prompt asks a question about Las Vegas and your next prompt asks, “What are the top 5 activities here?”, the chat history tells the LLM that “here” means Las Vegas and may also help the LLM avoid repeating information it already provided.
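
To make the chat history structure concrete, here is a minimal sketch (not part of the sample; the Las Vegas messages are hypothetical) showing how a history of USER and CHATBOT messages is passed to llm.generateText(options):

/**
 * @NApiVersion 2.1
 */
require(['N/llm'], function(llm) {
    // Earlier turns of the conversation, oldest first
    const chatHistory = [
        { role: llm.ChatRole.USER, text: 'Tell me about Las Vegas.' },
        { role: llm.ChatRole.CHATBOT, text: 'Las Vegas is a resort city in Nevada...' }
    ];
    // With the history, the LLM can resolve "here" to Las Vegas
    const response = llm.generateText({
        prompt: 'What are the top 5 activities here?',
        chatHistory: chatHistory
    });
    log.debug('Response', response.text);
});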

Note:

This script sample uses the define function, which is required for an entry point script (a script you attach to a script record and deploy). You must use the require function if you want to copy the script into the SuiteScript Debugger and test it. For more information, see SuiteScript 2.x Global Objects.

/**
 * @NApiVersion 2.1
 * @NScriptType Suitelet
 */

define(['N/ui/serverWidget', 'N/llm'], (serverWidget, llm) => {
  /**
   * Creates NetSuite form to communicate with LLM
   */  
  function onRequest (context) {
    const form = serverWidget.createForm({
      title: 'Chat Bot'
    })
    const fieldgroup = form.addFieldGroup({
      id: 'fieldgroupid',
      label: 'Chat'
    })
    fieldgroup.isSingleColumn = true
    const historySize = parseInt(
      context.request.parameters.custpage_num_chats || '0')
    const numChats = form.addField({
      id: 'custpage_num_chats',
      type: serverWidget.FieldType.INTEGER,
      container: 'fieldgroupid',
      label: 'History Size'
    })
    numChats.updateDisplayType({
      displayType: serverWidget.FieldDisplayType.HIDDEN
    })

    if (context.request.method === 'POST') {
      numChats.defaultValue = historySize + 2
      const chatHistory = []
      for (let i = historySize - 2; i >= 0; i -= 2) {
        const you = form.addField({
          id: 'custpage_hist' + (i + 2),
          type: serverWidget.FieldType.TEXTAREA,
          label: 'You',
          container: 'fieldgroupid'
        })
        const yourMessage = context.request.parameters['custpage_hist' + i]
        you.defaultValue = yourMessage
        you.updateDisplayType({
          displayType: serverWidget.FieldDisplayType.INLINE
        })

        const chatbot = form.addField({
          id: 'custpage_hist' + (i + 3),
          type: serverWidget.FieldType.TEXTAREA,
          label: 'ChatBot',
          container: 'fieldgroupid'
        })
        const chatBotMessage =
          context.request.parameters['custpage_hist' + (i + 1)]
        chatbot.defaultValue = chatBotMessage
        chatbot.updateDisplayType({
          displayType: serverWidget.FieldDisplayType.INLINE
        })
        chatHistory.push({
          role: llm.ChatRole.USER,
          text: yourMessage
        })
        chatHistory.push({
          role: llm.ChatRole.CHATBOT,
          text: chatBotMessage
        })
      }

      const prompt = context.request.parameters.custpage_text
      const promptField = form.addField({
        id: 'custpage_hist0',
        type: serverWidget.FieldType.TEXTAREA,
        label: 'You',
        container: 'fieldgroupid'
      })
      promptField.defaultValue = prompt
      promptField.updateDisplayType({
        displayType: serverWidget.FieldDisplayType.INLINE
      })
      const result = form.addField({
        id: 'custpage_hist1',
        type: serverWidget.FieldType.TEXTAREA,
        label: 'ChatBot',
        container: 'fieldgroupid'
      })
      result.defaultValue = llm.generateText({
        prompt: prompt,
        chatHistory: chatHistory
      }).text
      result.updateDisplayType({
        displayType: serverWidget.FieldDisplayType.INLINE
      })
    } else {
      numChats.defaultValue = 0
    }

    form.addField({
      id: 'custpage_text',
      type: serverWidget.FieldType.TEXTAREA,
      label: 'Prompt',
      container: 'fieldgroupid'
    })

    form.addSubmitButton({
      label: 'Submit'
    })

    context.response.writePage(form)
  }

  return {
    onRequest: onRequest
  }
}) 


Evaluate an Existing Prompt and Receive a Response

The following sample evaluates an existing prompt, sends it to the default NetSuite large language model (LLM), and receives the response. The sample also shows the remaining free usage for the month.

In this sample, the llm.evaluatePrompt(options) method loads an existing prompt with an ID of stdprompt_gen_purch_desc_invt_item. This prompt applies to an inventory item record in NetSuite, and it uses several variables that represent fields on this record type, such as item ID, stock description, and vendor name. The method replaces the variables in the prompt with the values you specify, then sends it to the LLM and returns the response.

You can create and manage prompts using Prompt Studio. You can also use Prompt Studio to generate a SuiteScript example that uses the llm.evaluatePrompt(options) method and includes the variables for a prompt in the correct format. When viewing a prompt in Prompt Studio, click Show SuiteScript Example to generate SuiteScript code with all of the variables that prompt uses. You can then use this code in your scripts and provide a value for each variable.

For instructions about how to run a SuiteScript 2.1 code snippet in the debugger, see On-Demand Debugging of SuiteScript 2.1 Scripts. Step through the code until the line before the end of the script to see the response text returned from the LLM and the remaining free usage for the month.

Note:

This sample script uses the require function so that you can copy it into the SuiteScript Debugger and test it. You must use the define function in an entry point script (the script you attach to a script record and deploy). For more information, see SuiteScript 2.x Script Basics and SuiteScript 2.x Script Types.

/**
 * @NApiVersion 2.1
 */
require(['N/llm'],
    function(llm) {        
        const response = llm.evaluatePrompt({
            id: 'stdprompt_gen_purch_desc_invt_item',
            variables: {
                "form": {
                    "itemid": "My Inventory Item",
                    "stockdescription": "This is the stock description of the item.",
                    "vendorname": "My Item Vendor Inc.",
                    "isdropshipitem": "false",
                    "isspecialorderitem": "true",
                    "displayname": "My Amazing Inventory Item"
                },
                "text": "This is the purchase description of the item."
            }
        });
        const responseText = response.text;
        const remainingUsage = llm.getRemainingFreeUsage(); // View remaining monthly free usage
    }); 


Create a Prompt and Evaluate It

The following sample creates a prompt record, populates the required record fields, and evaluates the prompt by sending it to the default NetSuite large language model (LLM).

This sample creates a prompt record using the record.create(options) method of the N/record module and sets the values of the following fields, all of which are required to save a prompt record:

  • Name

  • Prompt Type

  • Model Family

  • Template

After the prompt record is saved, llm.evaluatePrompt(options) is called, but only the ID of the created prompt is provided as a parameter. The method throws a TEMPLATE_PROCESSING_EXCEPTION error because the prompt template includes a required variable (mandatoryVariable) that was not provided when the method was called. The error is caught, and a debug message is logged.

Next, the sample calls llm.evaluatePrompt(options) again but provides values for the mandatoryVariable and optionalVariable variables. This time, the call succeeds, and a debug message is logged. Finally, the sample calls llm.evaluatePrompt.promise(options) and confirms that the call succeeds. When the call succeeds, the prompt record is deleted.

You can create and manage prompts using Prompt Studio. You can also use Prompt Studio to generate a SuiteScript example that uses the llm.evaluatePrompt(options) method and includes the variables for a prompt in the correct format. When viewing a prompt in Prompt Studio, click Show SuiteScript Example to generate SuiteScript code with all of the variables that prompt uses. You can then use this code in your scripts and provide a value for each variable. For more information about Prompt Studio, see Prompt Studio.

For instructions about how to run a SuiteScript 2.1 code snippet in the debugger, see On-Demand Debugging of SuiteScript 2.1 Scripts. Step through the code until the line before the end of the script to see the responses returned from the LLM and the debug messages the script logs.

Note:

This sample script uses the require function so that you can copy it into the SuiteScript Debugger and test it. You must use the define function in an entry point script (the script you attach to a script record and deploy). For more information, see SuiteScript 2.x Script Basics and SuiteScript 2.x Script Types.

/**
 * @NApiVersion 2.1
 */
require(['N/record', 'N/llm'], function(record, llm) {
    const rec = record.create({
        type: "prompt"
    });
    
    rec.setValue({
        fieldId: "name",
        value: "Test"
    });
    rec.setValue({
        fieldId: "prompttype",
        value: "GENERIC"
    });
    rec.setValue({
        fieldId: "modelfamily",
        value: "COHERE_COMMAND_R"
    });
    rec.setValue({
        fieldId: "template",
        value: "${mandatoryVariable} <#if optionalVariable?has_content>${optionalVariable}<#else>World</#if>"
    });
    
    const id = rec.save();
    
    try {
        llm.evaluatePrompt({
            id: id
        });
    }
    catch (e) {
        if (e.name === "TEMPLATE_PROCESSING_EXCEPTION")
            log.debug("Exception", "Expected exception was thrown");
    }
    
    const response = llm.evaluatePrompt({
        id: id,
        variables: {
            mandatoryVariable: "Hello",
            optionalVariable: "People"
        }
    });
    if ("Hello People" === response.chatHistory[0].text)
        log.debug("Evaluation", "Correct prompt got evaluated");
        
    llm.evaluatePrompt.promise({
        id: id,
        variables: {
            mandatoryVariable: "Hello",
            optionalVariable: "World"
        }
    }).then(function(response) {
        if ("Hello World" === response.chatHistory[0].text)
            log.debug("Evaluation", "Correct prompt got evaluated");
        record.delete({
            type: "prompt",
            id: id
        });
        debugger;
    })
}); 


Provide Source Documents When Calling the LLM

The following code sample demonstrates how to provide source documents to the LLM when calling llm.generateText(options).

This sample creates two documents using llm.createDocument(options) that contain information about emperor penguins. These documents are provided as additional context when calling llm.generateText(options). The LLM uses information in the provided documents to augment its response using retrieval-augmented generation (RAG). For more information about RAG, see What is Retrieval-Augmented Generation (RAG)?

If the LLM uses information in the provided documents to generate its response, the llm.Response object that is returned from llm.generateText(options) includes a list of citations (as llm.Citation objects). These citations indicate which source documents the information was taken from.

For instructions about how to run a SuiteScript 2.1 code snippet in the debugger, see On-Demand Debugging of SuiteScript 2.1 Scripts.

Note:

This sample script uses the require function so that you can copy it into the SuiteScript Debugger and test it. You must use the define function in an entry point script (the script you attach to a script record and deploy). For more information, see SuiteScript 2.x Script Basics and SuiteScript 2.x Script Types.

/**
 * @NApiVersion 2.1
 */
require(['N/llm'], function(llm) {
    const doc1 = llm.createDocument({
        id: "doc1",
        data: "Emperor penguins are the tallest."
    });
    const doc2 = llm.createDocument({
        id: "doc2",
        data: "Emperor penguins only live in the Sahara desert."
    });
    
    llm.generateText({
        prompt: "Where do the tallest penguins live?",
        documents: [doc1, doc2],
        modelFamily: llm.ModelFamily.COHERE_COMMAND_R,
        modelParameters: {
            maxTokens: 1000,
            temperature: 0.2,
            topK: 3,
            topP: 0.7,
            frequencyPenalty: 0.4,
            presencePenalty: 0
        }
    });
}); 
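
The sample above discards the return value of llm.generateText(options). As a minimal follow-up sketch (assuming the same document and prompt as the sample), you can capture the llm.Response object and log its text along with any citations the LLM returned:

/**
 * @NApiVersion 2.1
 */
require(['N/llm'], function(llm) {
    const doc = llm.createDocument({
        id: "doc1",
        data: "Emperor penguins are the tallest."
    });
    const response = llm.generateText({
        prompt: "Where do the tallest penguins live?",
        documents: [doc]
    });
    log.debug("Response text", response.text);
    // response.citations holds llm.Citation objects indicating which
    // source documents the response drew from
    log.debug("Citations", JSON.stringify(response.citations));
});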


Receive a Partial Response from the LLM

The following code sample shows how to receive a partial response from the LLM using llm.generateTextStreamed(options).

When you send a prompt and related content to the LLM using llm.generateText(options) or llm.evaluatePrompt(options), you must wait until the entire response is generated before you can access the content of the response. By contrast, llm.generateTextStreamed(options) and llm.evaluatePromptStreamed(options) let you access the partial response content before the entire response has been generated. These methods are useful if you're sending a prompt that will generate a lot of content, letting you process the response more quickly and potentially improving the performance of your scripts.

This sample provides a short preamble (which is an optional parameter supported by Cohere models) and prompt to llm.generateTextStreamed(options), along with the model to use and a set of model parameters. The temperature model parameter controls the randomness and creativity of the response. Higher values (closer to 1) are appropriate for generating creative or diverse responses, which fits the prompt used in the sample.

The sample uses an iterator to examine each token returned by the LLM. Here, token.value contains the value of each token returned by the LLM, and response.text contains the partial response up to and including that token. For more information about iterators, see Iterator.

For instructions about how to run a SuiteScript 2.1 code snippet in the debugger, see On-Demand Debugging of SuiteScript 2.1 Scripts.

Note:

This sample script uses the require function so that you can copy it into the SuiteScript Debugger and test it. You must use the define function in an entry point script (the script you attach to a script record and deploy). For more information, see SuiteScript 2.x Script Basics and SuiteScript 2.x Script Types.

/**
 * @NApiVersion 2.1
 */
require(['N/llm'], function(llm) {
    const response = llm.generateTextStreamed({
       preamble: "You are a script writer for TV shows.", 
       prompt: "Write a 300 word pitch for a TV show about tigers.",
       modelFamily: llm.ModelFamily.COHERE_COMMAND_R,
       modelParameters: {
          maxTokens: 1000,
          temperature: 0.8,          // High temperature values result in more varied and
                                     // creative responses
          topK: 3,
          topP: 0.7,
          frequencyPenalty: 0.4,
          presencePenalty: 0
       }
    });

    var iter = response.iterator();
    iter.each(function(token){
        log.debug("token.value: " + token.value);
        log.debug("response.text: " + response.text);
        return true;
    })
}); 
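
llm.evaluatePromptStreamed(options) is not demonstrated above. The following minimal sketch assumes it accepts the same id and variables options as llm.evaluatePrompt(options) and returns the same kind of streamed response; the prompt ID and variables are hypothetical:

/**
 * @NApiVersion 2.1
 */
require(['N/llm'], function(llm) {
    const response = llm.evaluatePromptStreamed({
        id: 'custprompt_example',   // Hypothetical prompt ID
        variables: {
            mandatoryVariable: 'Hello',
            optionalVariable: 'World'
        }
    });

    // Same iterator pattern as llm.generateTextStreamed(options)
    var iter = response.iterator();
    iter.each(function(token) {
        log.debug("token.value: " + token.value);
        return true;
    });
});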


Find Similar Items Using Embeddings

The following code sample shows how to generate embeddings to determine the similarity of item names using a Suitelet and llm.embed(options). For more information about Suitelets, see SuiteScript 2.x Suitelet Script Type.

The sample starts by defining a helper function, cosineSimilarity(), that calculates the cosine similarity of two vectors (their dot product divided by the product of their magnitudes). This function is used later in the sample to compare the embedding of the selected item with the embeddings of other items to determine how similar they are. For more information about cosine similarity, see Cosine similarity.

Next, the sample defines the onRequest() function, which will be provided to the onRequest entry point for Suitelets. For a GET request, the sample creates a simple form with a dropdown list to select an item. A SuiteQL query is used to retrieve available items and populate the list. The form also includes a submit button.

For a POST request, the sample creates a list of inputs to provide to llm.embed(options). Each input contains an item name, and llm.embed(options) generates an embedding for each one using the Cohere Embed English model. Note that the models used with llm.generateText(options) can't generate embeddings; you must use a dedicated embed model. For a list of available embed models, see llm.EmbedModelFamily.

Finally, the sample builds an array of similarity results and uses cosineSimilarity() to calculate how similar the selected item is to each of the other available items. The results are sorted by highest similarity value and then displayed. Cosine similarity values range from -1 to 1 (for these embeddings, typically 0 to 1), with higher values representing greater similarity to the selected item.


Note:

This script sample uses the define function, which is required for an entry point script (a script you attach to a script record and deploy). You must use the require function if you want to copy the script into the SuiteScript Debugger and test it. For more information, see SuiteScript 2.x Global Objects.

/**
 * @NApiVersion 2.1
 * @NScriptType Suitelet
 * @NModuleScope SameAccount
 */
define(['N/ui/serverWidget','N/query', 'N/llm'],
    function(serverWidget, query, llm) {

        function cosineSimilarity(array1, array2) {
            // Dot product of the two vectors
            const dotProduct = array1.reduce(
                (sum, value, index) => sum + value * array2[index], 0);
            // Euclidean magnitude of each vector
            const magnitude1 = Math.sqrt(
                array1.reduce((sum, value) => sum + value * value, 0));
            const magnitude2 = Math.sqrt(
                array2.reduce((sum, value) => sum + value * value, 0));
            return dotProduct / (magnitude1 * magnitude2);
        }

        function onRequest(context) {
            if (context.request.method === 'GET') {
                var form = serverWidget.createForm({
                    title: 'Item Similarity Form'
                });

                var selectField = form.addField({
                    id: 'custpage_myselect',
                    type: serverWidget.FieldType.SELECT,
                    label: 'Select an Item to find similar items for'
                });

                var res = query.runSuiteQL(
                    "SELECT id, case when displayname is not null " +
                    "then displayname else itemid end name from item " +
                    "where rownum <= 96 order by id").asMappedResults();

                for (let i = 0; i < res.length; i++)
                {
                    selectField.addSelectOption({
                        value: res[i]['id'],
                        text: res[i]['name']
                    });
                }

                form.addSubmitButton({
                    label: 'Submit'
                });

                context.response.writePage(form);
            } else {
                var selectedValue = context.request.parameters.custpage_myselect;
                var res = query.runSuiteQL(
                    "SELECT id, case when displayname is not null " +
                    "then displayname else itemid end name from item " +
                    "where rownum <= 96 order by id").asMappedResults();
                
                const inputs = [];
                let selectedName;
                let selectedIndex = 0;

                for (let i = 0; i < res.length; i++)
                {
                    if (res[i]['id'] == selectedValue)
                    {
                        selectedName = res[i]["name"];
                        selectedIndex = i;
                    }
                    inputs.push(res[i]["name"]);
                }

                var embeddingResult = llm.embed({
                    embedModelFamily: llm.EmbedModelFamily.COHERE_EMBED_ENGLISH,
                    inputs: inputs
                });
                
                var item = embeddingResult.embeddings[selectedIndex];
                var similarityResults = [];

                for (var j = 0; j < embeddingResult.inputs.length; j++)
                    similarityResults.push({
                        itemName: embeddingResult.inputs[j],
                        similarity: cosineSimilarity(item, embeddingResult.embeddings[j])
                    });

                similarityResults.sort((a,b) => b.similarity - a.similarity);
                context.response.write(JSON.stringify(similarityResults, null, 2));
            }
        }

        return {
            onRequest: onRequest
        };
    }
); 
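
To try llm.embed(options) in the SuiteScript Debugger without deploying the Suitelet, a minimal sketch (with hypothetical item names) could look like this:

/**
 * @NApiVersion 2.1
 */
require(['N/llm'], function(llm) {
    const result = llm.embed({
        embedModelFamily: llm.EmbedModelFamily.COHERE_EMBED_ENGLISH,
        inputs: ['Desk Lamp', 'Office Chair', 'LED Desk Light']
    });
    // result.embeddings[i] is the embedding vector for result.inputs[i]
    log.debug('Number of embeddings', result.embeddings.length);
    log.debug('First vector length', result.embeddings[0].length);
});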

