Enabling statistics incurs some CPU cost for data collection and aggregation. In many situations this overhead will not make a noticeable difference to system performance. However, for systems under maximum load, including benchmark loads, the small overhead of statistic collection can begin to be noticeable.
Here are some tips for handling execution overheads:
For dynamic statistics, only archive those that are important to record 24x7.
Statistics can be suspended, eliminating data collection and its overhead. This may be useful if gathering a short interval of a statistic is sufficient for your needs (such as when troubleshooting performance). Enable the statistic, wait a few minutes, then click the power icon in the Datasets view to suspend it. Suspended datasets keep their data for later viewing.
Keep an eye on overall performance via the static statistics when enabling and disabling dynamic statistics.
Be aware that drilldowns incur overhead for all events, not just those that match. For example, you may trace "NFSv3 operations per second for client deimos" when there is currently no NFSv3 activity from deimos. This does not mean that the statistic has no execution overhead: the appliance must still trace every NFSv3 event and then compare the host with "deimos" to see whether the data should be recorded in this dataset, so most of the execution cost has already been paid at that point.
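The behavior above can be sketched in a few lines. This is a hypothetical illustration, not appliance code: the point is that the drilldown filter is applied after each event is traced, so the tracing cost is paid per event, not per match.

```python
# Hypothetical sketch (not appliance code): a drilldown filter runs
# after every event has been traced, so the tracing cost is paid even
# when no event matches the filter.

def trace_nfsv3_events(events, client_filter):
    """Trace every event, then keep only those matching the filter."""
    traced = 0
    recorded = []
    for event in events:                        # every NFSv3 event is traced...
        traced += 1                             # ...so this cost is always paid
        if event["client"] == client_filter:    # filter applied afterwards
            recorded.append(event)
    return traced, recorded

# Even with no matching activity from "deimos", all events are traced:
events = [{"client": "phobos", "op": "read"}] * 1000
traced, recorded = trace_nfsv3_events(events, "deimos")
# traced == 1000, recorded == []
```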
Some statistics are sourced from operating system counters that are always maintained; these may be called static statistics. Gathering them has negligible effect on the performance of the system, since to an extent the system is already maintaining them (they are usually gathered by an operating system feature called Kstat). Examples of these statistics are:
When seen in the BUI, statistics from the above list that have no "broken down by" text may instead be labeled "as a raw statistic".
Since these statistics have negligible execution cost and provide a broad view of system behaviour, many are archived by default. See Default Statistics.
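To see why static statistics are so cheap, consider this hypothetical sketch (the class and function names are illustrative, not appliance APIs): the system increments a counter as part of its normal operation, so gathering the statistic is a single read per interval, regardless of how many events occurred.

```python
# Hypothetical sketch: a static statistic samples a counter the system
# already maintains, so the per-interval collection cost is constant
# no matter how many events occurred.

class KernelCounter:
    """Stand-in for an OS-maintained counter (e.g. a Kstat value)."""
    def __init__(self):
        self.value = 0

    def on_event(self):
        self.value += 1          # maintained by the system regardless

def sample(counter, last):
    """Per-interval cost: one read and one subtraction."""
    return counter.value - last, counter.value

counter = KernelCounter()
for _ in range(100000):
    counter.on_event()           # normal system activity
delta, last = sample(counter, 0)
# delta == 100000, gathered with a single read
```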
These statistics are created dynamically, and are not usually maintained by the system (they are gathered by an operating system feature called DTrace). Each event is traced, and each second this trace data is aggregated into the statistic. The cost of these statistics is therefore proportional to the number of events.
Tracing disk details when the activity is 1000 ops/sec is unlikely to have a noticeable effect on performance, whereas measuring network details when pushing 100,000 packets/sec is likely to have a negative effect. The type of information gathered is also a factor: tracing file names and client names will increase the performance impact.
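The trace-then-aggregate model can be sketched as follows. This is an illustrative simplification, not the DTrace implementation: one trace action fires per event, and the results are rolled up into per-second buckets, so the collection cost scales with the event rate.

```python
# Hypothetical sketch: each event is traced individually, then the
# results are aggregated into per-second buckets, so the collection
# cost grows in proportion to the event rate.

from collections import Counter

def aggregate_per_second(events):
    """events: iterable of (timestamp_sec, detail) tuples."""
    stat = Counter()
    for ts, detail in events:        # one trace action per event
        stat[(ts, detail)] += 1      # rolled up into per-second buckets
    return stat

# 1,000 ops/sec costs 1,000 trace actions; 100,000/sec would cost 100x more.
events = [(0, "disk0")] * 600 + [(0, "disk1")] * 400
stat = aggregate_per_second(events)
# stat[(0, "disk0")] == 600, stat[(0, "disk1")] == 400
```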
Examples of dynamic statistics include:
"..." denotes any of the protocols.
The best way to determine the impact of these statistics is to enable and disable them while running under steady load. Benchmark software may be used to apply that steady load. See Working with Analytics for the steps to calculate performance impact in this way.
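The comparison described above can be reduced to simple arithmetic. In this hypothetical sketch (the function name and figures are illustrative), the impact of a statistic is the percentage drop in throughput between runs of the same steady load with the statistic disabled and enabled:

```python
# Hypothetical sketch: estimate the cost of a statistic by comparing
# throughput under the same steady load with the statistic disabled
# and then enabled.

def percent_impact(ops_per_sec_disabled, ops_per_sec_enabled):
    """Return the throughput drop as a percentage of the baseline."""
    drop = ops_per_sec_disabled - ops_per_sec_enabled
    return 100.0 * drop / ops_per_sec_disabled

# e.g. 50,000 ops/sec without the statistic, 48,500 with it enabled:
impact = percent_impact(50000, 48500)
# impact == 3.0 (a 3% performance cost)
```

Repeating the measurement a few times in each state helps separate the overhead of the statistic from ordinary run-to-run variation in the load.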