Map/Reduce Governance
As with all script types, NetSuite imposes usage limits on map/reduce scripts. Map/reduce script governance rules have two main categories:
- Some limits interrupt the current function invocation if exceeded. These limits are known as hard limits.
- Other limits are managed automatically by the system. These limits don't interrupt function invocations. Instead, they can make a job yield and reschedule its work after the function invocation is complete. These limits are known as soft limits.
Note that the system doesn't limit the total duration of a map/reduce script deployment instance. Users can't set a limit on this either. The system's limits regulate specific parts of the deployment, like a single function invocation's duration.
One way that NetSuite measures a script’s activity is through usage units. For more information about usage units, see SuiteScript Governance and Limits.
Hard Limits on Total Persisted Data
A map/reduce script can't use more than 200MB of persisted data at any time. If you exceed this limit, the system throws a PERSISTED_DATA_LIMIT_FOR_MAPREDUCE_SCRIPT_EXCEEDED error. The script also ends its current function invocation, exits the current stage, and moves to the summarize stage. (This error does not occur in the summarize stage, because the total persisted data cannot be increased during that stage.)
Persisted data is calculated by adding up the following:
- Total size of all keys and values not yet mapped
- Total size of all keys and values not yet reduced
- Total size of all keys and values written as results in reduce
After a key/value pair is processed, it no longer counts toward the total storage size. Data that is preserved for internal purposes, such as troubleshooting or analytics, also doesn't count toward the user-facing storage limit enforced for the script.
The system takes into account any search results retrieved and returned by the input function. A large number of columns in a result set can increase the data usage significantly.
During the map and reduce stages, the total size is therefore a measure of the keys and values still waiting to be processed.
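The bookkeeping described above can be pictured with a short plain-JavaScript sketch. This is an illustration, not NetSuite's internal accounting: the function names are invented, and string length is used as a rough stand-in for the real byte-based size calculation.

```javascript
// Rough sketch of how the persisted-data total is composed.
// Sizes are approximated as string lengths; NetSuite's real
// accounting is internal to the platform and works in bytes.
function pairSize(pair) {
  return String(pair.key).length + String(pair.value).length;
}

function persistedDataTotal(pendingMap, pendingReduce, reduceResults) {
  const sum = (pairs) => pairs.reduce((t, p) => t + pairSize(p), 0);
  // Only unprocessed pairs and written reduce results count.
  return sum(pendingMap) + sum(pendingReduce) + sum(reduceResults);
}

const total = persistedDataTotal(
  [{ key: 'rec1', value: '{"amount":100}' }], // not yet mapped
  [{ key: 'cust7', value: '[1,2,3]' }],       // not yet reduced
  []                                          // reduce results so far
);
// total → 30
```

As pairs flow through the map and reduce stages, they leave the first two buckets, so the total shrinks as work completes and grows as reduce results are written.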
Hard Limits on Function Invocations
The table below shows the limits for map/reduce script function invocations. If you exceed these limits, the system throws an SSS_USAGE_LIMIT_EXCEEDED error. The system's response to this error depends on the stage and script configuration, as shown in the table.
Stage | Limits per function invocation | Response to SSS_USAGE_LIMIT_EXCEEDED error
---|---|---
getInputData | 10,000 usage units; 60 minutes; instruction count limit for Scheduled scripts | The script ends the function invocation and exits the stage. It proceeds directly to the summarize stage.
map | 1,000 usage units; 5 minutes; instruction count limit for User Event scripts | The response has two parts. The function invocation ends; you can configure the second part of the response.
reduce | 5,000 usage units; 15 minutes; instruction count limit for User Event scripts | Same two-part response as the map stage.
summarize | 10,000 usage units; 60 minutes; instruction count limit for Scheduled scripts | The script stops executing.
Script governance applies to each script invocation, not the entire execution.
As shown in the table above, a single invocation of the map function should not:
- Use more than 1,000 usage units (the same limit as a Mass Update script)
- Run for more than 5 minutes
- Exceed the instruction count limit for User Event scripts

A single invocation of the reduce function should not:
- Use more than 5,000 usage units
- Run for more than 15 minutes
- Exceed the instruction count limit for User Event scripts

A single invocation of the getInputData or summarize function should not:
- Use more than 10,000 usage units
- Run for more than 60 minutes
- Exceed the instruction count limit for Scheduled scripts
If you're using map/reduce scripts as intended, you shouldn't come close to these limits, especially for map and reduce functions. In general, each invocation of a map or reduce function should do a relatively small portion of work. For more details, see the Map/Reduce Script Best Practices section in the SuiteScript Developer Guide.
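To make the "small portion of work per invocation" idea concrete, here is a plain-JavaScript simulation of the stage flow. This is not SuiteScript and the helper names are invented; in a real map/reduce script, the platform delivers one key/value pair per map invocation and one distinct key (with its grouped values) per reduce invocation.

```javascript
// Simulate map/reduce stage flow: each map call handles exactly one
// input element, and output is grouped by key before reduce runs
// once per distinct key. Keeping each call small keeps every
// invocation far below the per-invocation governance limits.
function runMapReduce(input, mapFn, reduceFn) {
  const grouped = new Map();
  for (const item of input) {
    // One map invocation per input element.
    const { key, value } = mapFn(item);
    if (!grouped.has(key)) grouped.set(key, []);
    grouped.get(key).push(value);
  }
  const results = [];
  for (const [key, values] of grouped) {
    // One reduce invocation per distinct key.
    results.push(reduceFn(key, values));
  }
  return results;
}

// Example: total order amounts per customer.
const orders = [
  { customer: 'A', amount: 10 },
  { customer: 'B', amount: 5 },
  { customer: 'A', amount: 7 },
];
const totals = runMapReduce(
  orders,
  (o) => ({ key: o.customer, value: o.amount }),
  (key, values) => ({ key, total: values.reduce((a, b) => a + b, 0) })
);
// totals → [{ key: 'A', total: 17 }, { key: 'B', total: 5 }]
```

Because each invocation touches only one order (map) or one customer (reduce), governance resets between calls and no single invocation approaches its limits.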
Soft Limits on Long-Running Map and Reduce Jobs
In addition to the limits described in Hard Limits on Function Invocations, the system also has a soft limit of 10,000 usage units per map and reduce job.
To understand this limit, note that all map/reduce scripts are processed by SuiteCloud Processors. A processor is a virtual unit of processing power that runs a job.
The 10,000-unit soft limit is a mechanism designed to prevent any processor from being monopolized by a long-running map or reduce job. During the map and reduce stages, after each function invocation, the system checks the total number of units that have been used by the job. If the total usage has surpassed 10,000 units, the job gracefully ends its execution and a new job is created to take its place. The new job has the same priority, but a later timestamp. This is called yielding.
Yielding is also affected by the script deployment record’s Yield After Minutes field. This time limit works similarly to the 10,000-unit limit: The system checks the time limit after each function invocation ends. If the time limit is exceeded, the job yields, even if the 10,000-unit limit hasn't been reached. Yield After Minutes defaults to 60, but you can set it between 3 and 60. For more details, see Yield After Minutes. See also Map/Reduce Yielding.
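The yield check can be sketched in plain JavaScript as follows. This illustrates the behavior described above, not NetSuite internals; the unit costs and function names are invented assumptions.

```javascript
// After each function invocation, the system checks cumulative usage.
// Once the job passes the soft limit, it yields: it stops gracefully
// after the current invocation, and a same-priority replacement job
// (with a later timestamp) picks up the remaining work.
const SOFT_LIMIT_UNITS = 10000;

function runJob(invocations, startIndex = 0) {
  let unitsUsed = 0;
  for (let i = startIndex; i < invocations.length; i++) {
    unitsUsed += invocations[i](); // each invocation reports its unit cost
    if (unitsUsed > SOFT_LIMIT_UNITS) {
      // Yield: the current invocation finished; hand off the rest.
      return { status: 'YIELDED', resumeAt: i + 1 };
    }
  }
  return { status: 'COMPLETE', resumeAt: invocations.length };
}

// 30 invocations costing 400 units each: the 26th invocation pushes
// the total past 10,000, so the job yields and a new job resumes there.
const work = Array.from({ length: 30 }, () => () => 400);
const firstJob = runJob(work);                      // yields at index 26
const secondJob = runJob(work, firstJob.resumeAt);  // completes the rest
```

Note that the check happens only between invocations: a single invocation is never interrupted by the soft limit, which is what distinguishes yielding from the hard per-invocation limits.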
If your map/reduce script triggers other operations, such as scripts or workflows, their execution doesn't count toward the map/reduce script's own time and usage limits. For example, if your map/reduce script triggers multiple user event scripts, the time those user event scripts spend executing doesn't count toward the map/reduce time limit. In these cases, the real elapsed time may exceed the 60-minute soft limit even though the counted script execution time does not.
Per-enqueue limits are soft limits on the work queued at any particular stage of the script instance. The system checks per-enqueue limits after each function invocation but doesn't stop execution when a limit is reached. Instead, when the script exceeds the soft limit, it yields control of the queue after the current invocation finishes.
Yielding is separate from the limits for single map and reduce function invocations. Those limits are explained in Hard Limits on Function Invocations. Exceeding these limits throws an SSS_USAGE_LIMIT_EXCEEDED error and ends the function invocation, even if it's not finished.
Character Limits on Map and Reduce Jobs
Keys in map/reduce scripts (specifically, in mapContext or reduceContext objects) are limited to 3,000 characters, and values are limited to 10 MB. A key over 3,000 characters returns a KEY_LENGTH_IS_OVER_3000_BYTES error; a value over 10 MB returns a VALUE_LENGTH_IS_OVER_10_MB error.
If you have map/reduce scripts that use the mapContext.write(options) or reduceContext.write(options) methods, make sure that key strings are shorter than 3,000 characters and value strings are smaller than 10 MB. Consider the potential length of dynamically generated strings, as they may exceed these limits. Avoid using keys to pass data; use values instead.
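A defensive guard applied before each write call could look like the following sketch. The 3,000-character and 10 MB thresholds come from the errors above, but the helper name and the byte-size approximation are assumptions; this is plain JavaScript (using Node's Buffer), not a SuiteScript API.

```javascript
// Validate a key/value pair before passing it to mapContext.write or
// reduceContext.write, so oversized data fails fast with a clear message.
const MAX_KEY_CHARS = 3000;
const MAX_VALUE_BYTES = 10 * 1024 * 1024; // 10 MB

function checkWritable(key, value) {
  if (String(key).length > MAX_KEY_CHARS) {
    throw new Error(
      'Key exceeds ' + MAX_KEY_CHARS +
      ' characters; pass large data in the value instead'
    );
  }
  // Approximate the serialized value's size in UTF-8 bytes.
  const bytes = Buffer.byteLength(
    typeof value === 'string' ? value : JSON.stringify(value), 'utf8'
  );
  if (bytes > MAX_VALUE_BYTES) {
    throw new Error('Value exceeds 10 MB');
  }
  return true;
}

const ok = checkWritable('customer:42', { total: 199.5 }); // true
```

Running such a check just before each write turns a hard-to-diagnose platform error into an immediate, descriptive failure and reinforces the rule above: keep keys short and carry the payload in the value.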