Data Profiling and Caching

NetSuite uses proactive monitoring, data profiling, and caching to ensure performance and scalability.

Database performance is optimized by analyzing data distribution. The resulting data profile is used to determine optimal query execution plans. NetSuite monitors data changes on an ongoing basis to keep the data profile current.

Caching stores recently used data, code, and query results for future reuse. Caching is used at strategic points in the application to improve performance and scalability, and occurs at the database, application infrastructure, network, and browser levels. Content that can be loaded from a cache improves server, network, and client response times.
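The principle behind this kind of caching can be illustrated with a minimal, generic sketch. This is not NetSuite code; it uses Python's `functools.lru_cache` to stand in for any cache layer, with a simulated query delay:

```python
import functools
import time

@functools.lru_cache(maxsize=128)
def run_query(sql):
    """Simulate an expensive database query with a fixed delay."""
    time.sleep(0.05)
    return f"results for {sql!r}"

run_query("SELECT id FROM transaction")  # first call: computed, then cached
run_query("SELECT id FROM transaction")  # second call: served from the cache
print(run_query.cache_info())            # one hit, one miss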

Changes in data and configuration can affect caching, data profiles, and performance. Code changes, new releases, bundle installations or upgrades, SuiteApp deployments, or enabling new features can invalidate a cache. Customization changes in SuiteScript scripts, workflows, templates, forms, and fields can have a similar effect on caching and performance. These changes can result in temporary latency because new code and assets are not yet cached or optimized.

Large imports, deletions, record changes, or new custom record types can temporarily affect application performance. Rapid changes in data can invalidate the existing data profile and result in temporary latency until the database can gather statistics to achieve optimal query performance.

The frequency, volume, and timing of system use can also affect performance. NetSuite is designed to perform at scale. Infrequently used record operations may experience increased latency because they cannot fully benefit from caching. Logging out, changing roles, or using per-user customization can decrease the effectiveness of caching.

Data Profiling and Caching Considerations

Minimize role switching. Frequently switching roles can invalidate session-level and role-level caches.

Limit the number of unique forms, sublists, lists, and fields.

Tasks that are performed infrequently may experience latency the first time they are performed. Grouping these infrequently used activities into one session can mitigate this latency by limiting the first-use cost to the initial action.

Data changes and large data imports should be performed outside of business hours and well in advance of any go-live dates. Allowing time for the system to process the data changes and gather statistics can mitigate the impact of large data changes.
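One simple way to enforce this scheduling rule in an automation script is a guard check before starting the bulk work. The sketch below is an illustration only; the 8:00-18:00 business window is an assumption and should be adjusted to your organization:

```python
from datetime import datetime

# Assumed business window of 8:00-18:00 local time; adjust for your organization.
BUSINESS_HOURS = range(8, 18)

def ok_to_run_bulk_import(now=None):
    """Return True when a large import would fall outside business hours."""
    now = now or datetime.now()
    return now.hour not in BUSINESS_HOURS

print(ok_to_run_bulk_import(datetime(2025, 1, 15, 2, 0)))   # 2 a.m. -> True
print(ok_to_run_bulk_import(datetime(2025, 1, 15, 10, 0)))  # 10 a.m. -> False
```

A scheduled job could call this check and defer the import to the next off-hours window instead of failing outright.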

Monitoring

Using the Record Pages Monitor in Application Performance Management (APM), you can review the aggregated performance of your top ten records, and any records on your watch list.

Using the Response Time chart and Throughput chart, you can compare the response time to the volume of record instances. It is not uncommon to see increased response times after periods of inactivity or infrequent use. You can test response time by performing multiple similar operations. Perform three or four operations of the same type on the same record type; the performance impact should affect only the first operation. For more information, see Record Pages Monitor Charts.
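The expected pattern from that test can be simulated with a short, generic sketch (again, not NetSuite code): a cached operation pays its full cost only on the first call, so only the first of several identical operations is slow.

```python
import functools
import time

@functools.lru_cache(maxsize=None)
def load_record(record_id):
    """Simulate a record load that is slow only while uncached."""
    time.sleep(0.1)
    return {"id": record_id}

timings = []
for _ in range(4):
    start = time.perf_counter()
    load_record(42)
    timings.append(time.perf_counter() - start)

# Only the first operation pays the uncached cost.
print([round(t, 3) for t in timings])
```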

You can use the Response Time chart to compare the timeline of performance after system updates, configuration changes, or large data imports.

Using the custom date and time range filter, you can compare the median times for two different time ranges across your monitored records. For more information, see Adding a Custom Date and Time Range on Record Pages Monitor.
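Comparing medians rather than means is useful here because a median resists one-off spikes, such as a single slow first-use load. A minimal sketch with hypothetical response times (the numbers below are invented for illustration):

```python
from statistics import median

# Hypothetical page response times in milliseconds for two date ranges.
before_import = [320, 290, 305, 2100, 310]  # one first-use spike at 2100 ms
after_import = [280, 275, 300, 295, 285]

print(median(before_import), median(after_import))  # 310 285
```

Note that the 2100 ms outlier barely moves the first median, which is why APM reports median times for range comparisons like this.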
