llm.SafetyMode

Note:

The content in this help topic pertains to SuiteScript 2.1.

Enum Description

The safety mode to be used for LLM requests.

Safety mode is available for Cohere models only and is designed to help filter and moderate content generated by the LLM. When using strict mode or contextual mode, the LLM may refuse to provide certain responses that include sensitive, harmful, or illegal suggestions.

Use this enum to set the value of the options.safetyMode parameter in llm.generateText(options) and llm.generateTextStreamed(options).

Note:

JavaScript does not include an enumeration type. The SuiteScript 2.x documentation uses the term enumeration (or enum) to describe a plain JavaScript object with a flat, map-like structure. In this object, each key points to a read-only string value.

Module

N/llm Module

Supported Script Types

Server scripts

For more information, see SuiteScript 2.x Script Types.

Since

2025.1

Values

SafetyMode.CONTEXTUAL (value: CONTEXTUAL)

This mode offers a less restrictive approach than strict mode while still rejecting harmful or illegal suggestions. It is suited for creative, entertainment, or academic purposes.

SafetyMode.OFF (value: OFF)

This mode removes restrictions on the LLM's responses and may return unsafe, risky, or illegal suggestions.

Some content filtering is applied even when safety mode is off, and you might receive an INAPPROPRIATE_CONTENT_DETECTED error if you ask the LLM to provide content that is deemed inappropriate, unsafe, or illegal.

Warning:

Use caution with this mode and always review the responses for suitability.

SafetyMode.STRICT (value: STRICT)

This mode aims to avoid sensitive topics entirely and is suited for corporate communications and customer service.

This is the default mode when calling llm.generateText(options) or llm.generateTextStreamed(options).
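Because SafetyMode.OFF can still raise an INAPPROPRIATE_CONTENT_DETECTED error, it can be useful to handle that case explicitly. The following is a minimal, hypothetical sketch of that pattern; the llm object below is a stand-in stub so the pattern can be shown outside NetSuite, and in a real server script you would load the actual module from N/llm instead.

```javascript
// Hypothetical sketch: handling INAPPROPRIATE_CONTENT_DETECTED when safety
// mode is OFF. The llm object is a stub standing in for N/llm; it is NOT
// the real module.
const llm = {
    SafetyMode: Object.freeze({ CONTEXTUAL: 'CONTEXTUAL', OFF: 'OFF', STRICT: 'STRICT' }),
    generateText(options) {
        // Stub behavior: pretend the content filter rejected the prompt.
        const err = new Error('Inappropriate content detected');
        err.name = 'INAPPROPRIATE_CONTENT_DETECTED';
        throw err;
    }
};

let text;
try {
    text = llm.generateText({
        prompt: 'Hello world!',
        safetyMode: llm.SafetyMode.OFF
    }).text;
} catch (e) {
    if (e.name === 'INAPPROPRIATE_CONTENT_DETECTED') {
        // Fall back gracefully instead of letting the script fail.
        text = '(response blocked by content filter)';
    } else {
        throw e;
    }
}
console.log(text);
```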

Syntax

Important:

The following code sample shows the syntax for this member. It isn't a functional example. For a complete script example, see N/llm Module Script Samples.

// Add additional code
...

const response = llm.generateText({
    prompt: "Hello world!",
    safetyMode: llm.SafetyMode.CONTEXTUAL
});

...
// Add additional code
