Associated Risks, Controls, and Mitigation Strategies
While AI agents and large language models (LLMs) offer significant benefits, their use can introduce additional risks to organizations. This topic is intended for both end users who will interact with AI agents and account administrators responsible for configuring NetSuite and managing this technology within the organization.
This topic outlines the key risks associated with the use of external AI agents and LLMs, security controls available in NetSuite, and suggest mitigation strategies. Note that this list may not be exhaustive or universally applicable, as both technology and associated risks continue to evolve.
Risks
The following are some of the key risks inherent to the use of LLMs:
-
Prompt injection occurs when a malicious actor embeds hidden instructions within content that is processed by the LLM. This can cause the AI agent to perform unintended actions, such as executing unauthorized commands or leaking sensitive data. These hidden instructions can be placed in various sources, including PDF documents, web pages, or responses from third-party MCP servers or MCP tools added by end users.
-
Hallucination refers to situations where the LLM generates information that appears accurate but is in fact incorrect or entirely fabricated.
Both prompt injection and hallucination can result in:
-
Unintended Actions – The AI agent may run powerful MCP tool functions – such as making payments or granting approvals – without the user’s explicit intent.
-
Corruption of Data – The AI agent may call MCP tools that delete or modify data in ways not intended by the user, potentially leading to data loss or integrity issues.
-
Sensitive Information Disclosure – The AI agent may access sensitive data from NetSuite and disclose it to unauthorized parties.
Controls in NetSuite
Prompt injection and hallucination are LLM weaknesses and out of NetSuite control. Although NetSuite cannot eliminate these risks, it offers controls that account administrators and end users can use to reduce the impact of these risks.
-
Account administrators have full control over which users are granted access to MCP. By default, no users have access. MCP permission must be explicitly granted to a role.
-
MCP tools run with the same permissions as a NetSuite user who is using an external AI agent.
-
MCP tools are never executed with Administrator or roles that have full permissions to access NetSuite features.
-
MCP tools can only access a subset of SuiteScript API actions. Specifically:
-
"Run as role" is disabled for all tools.
-
Tools cannot invoke any SuiteScript scripts that run with elevated privileges.
-
Tools cannot invoke Suitelets.
-
Tools cannot perform HTTP requests to external destinations.
-
-
All usage of MCP tools is logged, providing traceability and accountability for actions performed by AI agents.
-
During the OAuth 2.0 authorization flow, the MCP server obtains explicit consent from each user for every AI agent.
-
End users can scope MCP tools for an AI agent by specifying the MCP tools namespace.
Enabling External AI Agents in NetSuite
By default, the use of external AI agents in NetSuite is disabled. Enabling this feature requires coordinated actions from both account administrators and end users:
Steps for Account Administrators
-
Assign MCP permissions – Grant MCP permissions to users who are authorized to use the feature.
-
Install MCP tools – Install the MCP tools, which define the specific actions that external AI agents can perform.
The actions available to external AI agents are strictly limited to the functionality exposed by the installed MCP tools. Because external AI agents act on behalf of users, only agents representing users with MCP permissions can call MCP tool functions.
Steps for End Users
-
Configure an external AI agent.
-
Authorize the external AI agent within your NetSuite account to act on your behalf.
Mitigation Strategies
Prompt injection and hallucination are known LLM weaknesses and out of NetSuite control. The following strategies can help to reduce the risks of unintended actions, data corruption, and sensitive information disclosure:
Vendor and Tool Trustworthiness
-
Use only trusted AI agents. Review their documentation or contact vendors to understand how they address prompt injection and hallucination risks.
-
Connect to trusted MCP servers.
-
Use only trusted MCP tools.
Access Management
-
Grant MCP permission only to users who require it.
-
Do not assign MCP permission to users with high privileges. NetSuite does not allow Administrator roles or roles that have full permissions to access NetSuite features to use MCP.
-
Create separate roles for different MCP tools to further limit the scope of what an external AI agent can do.
-
Regularly review and update the permissions for all MCP tools and end users.
Scope Limitation
-
Install and enable only the MCP tools that are necessary for your business needs.
-
When trying new external AI agents, MCP servers, or MCP tools, start with a limited scope to minimize potential impact if something goes wrong.
-
Encourage end users to carefully select which MCP tools are enabled in their AI agents using MCP tools namespaces.
User Awareness
-
Prefer external AI agents that prompt users for confirmation before executing sensitive or high-impact actions.
-
Train end users on the risks of external AI agents and LLMs, and best practices for safe usage.
Technical Safeguard
-
Carefully consider simultaneous use of MCP tools that allow access to your local file system or other internal or external systems, or ensure they are run in a secure sandbox environment.
Compliance Risks
As part of your use of MCP get familiar with potential limitations or restrictions establish in the regulations where you operate that may affect your use of existing tools or the use of new tools created by you. Certain geographies have restrictions and requirements for certain use cases like HR, financial, etc.