Confidence Intervals

Because Monte Carlo simulation uses random sampling to estimate model results, statistics computed on these results, such as the mean, standard deviation, and percentiles, always contain some sampling error. A confidence interval (CI) is a bound calculated around a statistic that measures this error with a given level of probability. For example, a 95 percent confidence interval around the mean is constructed so that there is a 95 percent chance that the true mean lies within the specified interval. Conversely, there is a 5 percent chance that the mean lies outside the interval. Shown graphically, a confidence interval around the mean looks like Figure 8, Confidence Interval.

Figure 8. Confidence Interval

This image shows a horizontal line divided into two equal segments. The left end of the line is labeled CI subscript min, the center is labeled Mean, and the right end of the line is labeled CI subscript max.

For most statistics, the confidence interval is symmetrical around the statistic, so that x = (CImax - Mean) = (Mean - CImin). This symmetry lets you make statements of confidence such as “the mean will lie within the estimated mean plus or minus x with 95 percent probability.”
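
As a minimal illustration of this calculation (a sketch, not Crystal Ball's implementation), the following Python example computes a symmetric 95 percent confidence interval for the mean of a batch of simulated results, using the normal-approximation half-width x = 1.96 * s / sqrt(n). The model and trial count are hypothetical.

```python
import random
import statistics

random.seed(1)

# Hypothetical forecast: 10,000 Monte Carlo trials of a simple model.
trials = [random.gauss(100, 15) for _ in range(10_000)]

n = len(trials)
mean = statistics.fmean(trials)
s = statistics.stdev(trials)  # sample standard deviation

# Symmetric 95% CI around the mean: half-width x = z * s / sqrt(n),
# with z = 1.96 for 95% confidence (normal approximation).
x = 1.96 * s / n ** 0.5
ci_min, ci_max = mean - x, mean + x

print(f"mean = {mean:.2f}, CI = ({ci_min:.2f}, {ci_max:.2f}), x = {x:.2f}")
```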

Confidence intervals are important for determining the accuracy of statistics and, hence, the accuracy of the simulation. Generally speaking, as more trials are calculated, the confidence interval narrows and the statistics become more accurate. The precision control feature of Crystal Ball lets you stop the simulation when the chosen statistics reach the specified precision; Crystal Ball periodically checks whether each confidence interval has narrowed to less than the specified precision.
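
To show how the interval narrows as trials accumulate, here is a rough sketch of a precision-control-style loop: it keeps sampling until the 95 percent CI half-width drops below a chosen tolerance. The model, tolerance, and checking frequency are illustrative assumptions, not Crystal Ball's internal algorithm.

```python
import random
import statistics

random.seed(2)

def simulate_trial():
    # Hypothetical model output; stands in for one Crystal Ball trial.
    return random.gauss(100, 15)

tolerance = 0.5      # stop when the CI half-width falls below this (assumed value)
check_every = 1_000  # how often to test precision (assumed value)

results = []
while True:
    results.extend(simulate_trial() for _ in range(check_every))
    n = len(results)
    half_width = 1.96 * statistics.stdev(results) / n ** 0.5
    print(f"{n:>6} trials: 95% CI half-width = {half_width:.3f}")
    if half_width < tolerance:
        break
```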

Note that the Bootstrap tool in Crystal Ball enables you to calculate confidence intervals for any set of statistics using empirical methods.
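
As a rough sketch of the empirical idea behind such a tool (not the Bootstrap tool's actual algorithm), the following example resamples the simulation results with replacement and takes percentiles of the resampled statistic, here the median, to form a 95 percent confidence interval. All names and parameters are hypothetical.

```python
import random
import statistics

random.seed(3)

# Hypothetical forecast values from a finished simulation.
results = [random.gauss(100, 15) for _ in range(2_000)]

def bootstrap_ci(data, stat, resamples=1_000, level=0.95):
    """Percentile bootstrap CI for an arbitrary statistic."""
    estimates = sorted(
        stat(random.choices(data, k=len(data))) for _ in range(resamples)
    )
    lo = int(((1 - level) / 2) * resamples)
    hi = int((1 - (1 - level) / 2) * resamples) - 1
    return estimates[lo], estimates[hi]

ci_min, ci_max = bootstrap_ci(results, statistics.median)
print(f"bootstrap 95% CI for the median: ({ci_min:.2f}, {ci_max:.2f})")
```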

The following sections describe how Crystal Ball calculates the confidence interval for each statistic.