20 Exponential Smoothing
Learn about the Exponential Smoothing algorithm.
20.1 About Exponential Smoothing
Exponential smoothing is a forecasting method for time series data. It is a moving average method where exponentially decreasing weights are assigned to past observations.
Exponential smoothing methods have been widely used in forecasting for over half a century. A forecast is a prediction based on historical data and patterns. Forecasting has applications at the strategic, tactical, and operational levels. For example, at a strategic level, forecasting is used for projecting return on investment, growth, and the effect of innovations. At a tactical level, forecasting is used for projecting costs, inventory requirements, and customer satisfaction. At an operational level, forecasting is used for setting targets and predicting quality and conformance with standards.
In its simplest form, exponential smoothing is a moving average method with a single parameter that models an exponentially decreasing effect of past levels on future values. With a variety of extensions, exponential smoothing covers a broader class of models than other well-known approaches, such as the Box-Jenkins autoregressive integrated moving average (ARIMA) approach. Oracle Machine Learning for SQL implements exponential smoothing using a state-of-the-art state space method that incorporates a single source of error (SSOE) assumption, which provides theoretical and performance advantages. The implementation supports:

A matrix of models that mix and match error type (additive or multiplicative), trend (additive, multiplicative, or none), and seasonality (additive, multiplicative, or none)

Models with damped trends

Models that directly handle irregular time series and time series with missing values

Multiple time series models
See Also:
Ord, J.K., et al., Time Series Forecasting: The Case for the Single Source of Error State Space Approach, Working Paper, Department of Econometrics and Business Statistics, Monash University, VIC 3800, Australia, April 2, 2005.
20.1.1 Exponential Smoothing Models
Exponential Smoothing models are a broad class of forecasting models that are intuitive, flexible, and extensible.
Members of this class include simple, single parameter models that predict the future as a linear combination of a previous level and a current shock. Extensions can include parameters for linear or nonlinear trend, trend damping, simple or complex seasonality, related series, various forms of nonlinearity in the forecasting equations, and handling of irregular time series.
Exponential smoothing assumes that a series extends infinitely into the past, but that the influence of the past on the future decays smoothly and exponentially fast. The rate of decay is expressed by one or more smoothing constants, which are parameters estimated by the model. The assumption is made practical for modeling real-world data by using an equivalent recursive formulation that is expressed only in terms of an estimate of the current level based on prior history and a shock to that estimate that depends on current conditions only. The procedure requires an estimate for the time period just prior to the first observation, which encapsulates all prior history. This initial estimate is an additional model parameter whose value is determined by the modeling procedure.
Components of ESM, such as trend and seasonality extensions, can have an additive or multiplicative form. The simpler additive models assume that shock, trend, and seasonality are linear effects within the recursive formulation.
20.1.2 Simple Exponential Smoothing
Simple exponential smoothing assumes the data fluctuates around a stationary mean, with no trend or seasonal pattern.
In a simple Exponential Smoothing model, each forecast (smoothed value) is computed as the weighted average of the previous observations, where the weights decrease exponentially depending on the value of the smoothing constant α. Values of α near one put almost all of the weight on the most recent observations, while values of α near zero allow distant past observations to have a large influence.
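The recursion described above can be sketched in a few lines. The following Python function is an illustrative sketch of the simple ESM update, not part of the OML4SQL API; the function and parameter names are invented for this example:

```python
def simple_exp_smoothing(y, alpha, level0):
    """Return the one-step-ahead forecasts for each observation and the
    final level (which is the forecast for the next, unseen period).

    level0 is the initial level estimate, the extra model parameter that
    encapsulates all history before the first observation.
    """
    level = level0
    forecasts = []
    for obs in y:
        forecasts.append(level)                     # forecast made before seeing obs
        level = alpha * obs + (1 - alpha) * level   # update level with the new shock
    return forecasts, level
```

With α near one the forecast tracks the latest observation; with α near zero it stays close to the long-run level.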
20.1.3 Models with Trend but No Seasonality
The preferred form of additive (linear) trend is sometimes called Holt’s method or double exponential smoothing.
Models with trend add a smoothing parameter γ and optionally a damping parameter φ. The damping parameter smoothly dampens the influence of past linear trend on future estimates of level, often improving accuracy.
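Using the γ and φ notation above, Holt's method with damping can be sketched as follows. This is an illustrative Python sketch of the standard damped-trend recursions, not the OML4SQL implementation; the function names are invented:

```python
def holt_damped(y, alpha, gamma, phi, level0, trend0):
    """Holt's linear trend method with damping parameter phi (phi=1: no damping).
    Returns the final level and trend estimates."""
    level, trend = level0, trend0
    for obs in y:
        prev_level = level
        level = alpha * obs + (1 - alpha) * (level + phi * trend)
        trend = gamma * (level - prev_level) + (1 - gamma) * phi * trend
    return level, trend

def holt_forecast(level, trend, phi, h):
    """h-step-ahead forecast: level plus the damped sum of future trend."""
    return level + sum(phi ** i for i in range(1, h + 1)) * trend
```

With φ = 1 the forecast function is a straight line; with φ < 1 the trend contribution flattens out as the horizon grows, which is often more realistic for long horizons.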
20.1.4 Models with Seasonality but No Trend
When the time series average does not change over time (stationary), but is subject to seasonal fluctuations, the appropriate model has seasonal parameters but no trend.
Seasonal fluctuations are assumed to balance out over periods of length m, where m is the number of seasons. For example, m=4 might be used when the input data are aggregated quarterly. For models with additive errors, the seasonal parameters must sum to zero. For models with multiplicative errors, the product of the seasonal parameters must be one.
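The sum-to-zero constraint on additive seasonal components can be illustrated with a small sketch that derives initial seasonal estimates from the data and re-centers them. The helper name is invented for this example and is not part of the Oracle API:

```python
def additive_seasonal_indices(y, m):
    """Initial additive seasonal estimates for m seasons, constrained to sum to zero."""
    overall = sum(y) / len(y)
    # Each season's estimate: mean of that season's observations minus the overall mean.
    indices = []
    for s in range(m):
        season_vals = y[s::m]
        indices.append(sum(season_vals) / len(season_vals) - overall)
    # Re-center so the additive constraint sum(indices) == 0 holds exactly.
    shift = sum(indices) / m
    return [i - shift for i in indices]
```

For a multiplicative-error model the analogous step would divide by the geometric mean so the product of the indices is one.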
20.1.5 Models with Trend and Seasonality
Holt and Winters introduced both trend and seasonality in an Exponential Smoothing model.
The original model, also known as Holt-Winters or triple exponential smoothing, considered an additive trend and multiplicative seasonality. Extensions include models with various combinations of additive and multiplicative trend, seasonality, and error, with and without trend damping.
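As an illustration of the triple recursion, the sketch below implements the fully additive Holt-Winters variant (additive trend and additive seasonality), which is the simplest member of this family; the original Holt-Winters model used multiplicative seasonality instead. The function names are invented for this example:

```python
def holt_winters_additive(y, m, alpha, gamma, delta, level0, trend0, season0):
    """Fully additive Holt-Winters: level, trend, and m seasonal components.

    season0 is a list of m initial seasonal components (summing to zero).
    Returns the final level, trend, and seasonal estimates.
    """
    level, trend = level0, trend0
    season = list(season0)
    for t, obs in enumerate(y):
        s = season[t % m]
        prev_level = level
        level = alpha * (obs - s) + (1 - alpha) * (level + trend)
        trend = gamma * (level - prev_level) + (1 - gamma) * trend
        season[t % m] = delta * (obs - level) + (1 - delta) * s
    return level, trend, season

def hw_forecast(level, trend, season, m, h, n):
    """Forecast h steps beyond a series of length n."""
    return level + h * trend + season[(n + h - 1) % m]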
20.1.6 Prediction Intervals
To compute prediction intervals, Exponential Smoothing (ESM) models are divided into three classes.
The simplest class is the class of linear models, which includes, among others, simple ESM, Holt's method, and additive Holt-Winters. Class 2 models (multiplicative error, additive components) make an approximate correction for violations of the Normality assumption. Class 3 models use a simple simulation approach to calculate prediction intervals.
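For the linear (Class 1) models the forecast distribution is Gaussian, with a variance that grows with the horizon. For simple ESM, the standard result is that the h-step-ahead forecast variance is σ²(1 + (h − 1)α²), where σ is the one-step-ahead residual standard deviation. A minimal sketch (function name invented for this example, not the Oracle API):

```python
import math

def simple_esm_interval(forecast, sigma, alpha, h, z=1.96):
    """Approximate 95% prediction interval for simple ESM (a Class 1 model).

    sigma is the one-step-ahead residual standard deviation; the h-step
    variance grows as sigma^2 * (1 + (h - 1) * alpha^2).
    """
    sd_h = sigma * math.sqrt(1 + (h - 1) * alpha ** 2)
    return forecast - z * sd_h, forecast + z * sd_h
```

Note that at h = 1 the interval is just forecast ± 1.96σ, and it widens for longer horizons.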
20.2 Data Preparation for Exponential Smoothing Models
Learn about preparing the data for an Exponential Smoothing (ESM) model.
To build an ESM model, you must supply the following:

Input data

An aggregation level and method, if the case id is a date type

Partitioning column, if the data are partitioned
In addition, for greater control over the build process, the user may optionally specify model build parameters, all of which have defaults:

Model

Error type

Optimization criterion

Forecast Window

Confidence level for forecast bounds

Missing value handling

Whether the input series is evenly spaced
Related Topics
See Also:
DBMS_DATA_MINING — Algorithm Settings: Exponential Smoothing Models, for a listing and explanation of the available model settings.
Note:
The term hyperparameter is also used interchangeably for model setting.
20.2.1 Input Data
Time series analysis requires ordered input data. Hence, each data row must consist of an [index, value] pair, where the index specifies the ordering.
When you create an Exponential Smoothing (ESM) model using the CREATE_MODEL or CREATE_MODEL2 procedure, the CASE_ID_COLUMN_NAME and TARGET_COLUMN_NAME parameters specify the columns used to compute the input indices and the observed time series values, respectively. The time column can be of type Oracle NUMBER, or a date type: DATE, TIMESTAMP, TIMESTAMP WITH TIME ZONE, or TIMESTAMP WITH LOCAL TIME ZONE. When the case id column is of type NUMBER, the model considers the input time series to be equally spaced. Only the ordinal position matters, with a lower number indicating an earlier time; the input time series is sorted based on the value of case_id (the time label). The case id column cannot contain missing values. To indicate a gap, the value column can contain a NULL. The magnitude of the difference between adjacent time labels is irrelevant and is not used to calculate the spacing or gap size. Integer numbers passed as the case id are assumed to be non-negative.
ESM also supports partitioned models and in such cases, the input table contains an extra column specifying the partition. All [index, value] pairs with the same partition ID form one complete time series. The Exponential Smoothing algorithm constructs models for each partition independently, although all models use the same model settings.
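The per-partition build can be pictured as grouping the [index, value] pairs by partition ID and fitting one model per group with shared settings. The sketch below uses simple ESM as the shared model type; the function name and row layout are invented for this illustration and are not the OML4SQL API:

```python
from collections import defaultdict

def build_partitioned_models(rows, alpha):
    """Build one simple-ESM model per partition; all partitions share alpha.

    rows are (partition_id, index, value) triples, a stand-in for an input
    table with a partition column. Returns {partition_id: one-step forecast}.
    """
    series = defaultdict(list)
    for part, idx, val in rows:
        series[part].append((idx, val))
    models = {}
    for part, pairs in series.items():
        pairs.sort()                       # order each series by its case id
        level = pairs[0][1]                # initialize level from the first value
        for _, val in pairs[1:]:
            level = alpha * val + (1 - alpha) * level
        models[part] = level               # final level = next-step forecast
    return models
```

Each partition's series is modeled independently; only the settings (here, α) are shared.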
Data properties may result in a warning notice, or settings may be disregarded. If the user requests a model with a multiplicative trend, multiplicative seasonality, or both, and the data contains values Y_t ≤ 0, the model type is set to the default. If the series contains fewer values than the number of seasons given by the user, the seasonality specifications are ignored and a warning is issued.
If the user has selected a list of predictor series using the EXSM_SERIES_LIST parameter, the input data can also include up to twenty additional time series columns.
Related Topics
20.2.2 Accumulation
For the Exponential Smoothing algorithm, the accumulation procedure is applied when the case id column is a date type (DATE, DATETIME, TIMESTAMP, TIMESTAMP WITH TIME ZONE, or TIMESTAMP WITH LOCAL TIME ZONE).
The case id can be a NUMBER column whose sort index represents the position of the value in the time series sequence. The case id column can also be a date type. A date type is accumulated in accordance with a user-specified accumulation window. Regardless of type, the case id is used to transform the column into an equally spaced time series. No accumulation is applied for a case id of type NUMBER. As an example, consider a time series of promotion events, where the time column contains the date of each event and the target column the profit it generated; the dates can be unequally spaced. The user must specify the spacing interval, which is the spacing of the accumulated (transformed) equally spaced time series. If the user specifies the interval to be month, an equally spaced time series with one value per calendar month is generated from the original series. The EXSM_INTERVAL setting specifies the spacing interval. The user must also specify a value for EXSM_ACCUMULATE, for example EXSM_ACCU_MAX, in which case the equally spaced monthly series contains the maximum profit over all events in that month as the observed time series value.
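The accumulation step in the example above can be sketched as follows. This is an illustrative Python sketch of the EXSM_ACCU_MAX behavior over monthly intervals, not the Oracle implementation; the function name is invented:

```python
from datetime import date

def accumulate_monthly_max(events):
    """Accumulate unequally spaced (date, profit) events into a monthly series,
    keeping the maximum profit per calendar month (the EXSM_ACCU_MAX behavior)."""
    by_month = {}
    for d, profit in events:
        key = (d.year, d.month)
        by_month[key] = max(profit, by_month.get(key, float("-inf")))
    return dict(sorted(by_month.items()))
```

A calendar month with no events does not appear in the result; in the equally spaced series it would become a gap, which is where the missing value handling described next comes in.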
20.2.3 Missing Value
Input time series can contain missing values. A NULL entry in the target column indicates a missing value. When the time column is of a datetime type, the accumulation procedure can also introduce missing values. The EXSM_SETMISSING setting can be used to specify how to handle missing values. The special value EXSM_MISS_AUTO indicates that, if the series contains missing values, it is to be treated as an irregular time series.
Note:
The missing value handling setting must be compatible with the model setting; otherwise, an error is raised.
20.2.4 Prediction
An Exponential Smoothing (ESM) model can be applied to make predictions by specifying the prediction window.
The EXSM_PREDICTION_STEP setting can be used to specify the prediction window. When the time column is of a datetime type, the prediction window is expressed as a number of intervals (setting EXSM_INTERVAL). If the time column is a number, the prediction window is the number of steps to forecast. Regardless of whether the time series is regular or irregular, EXSM_PREDICTION_STEP specifies the prediction window.
See Also:
Oracle Database PL/SQL Packages and Types Reference for a listing and explanation of the available model settings.
Note:
The term hyperparameter is also used interchangeably for model setting.
20.2.5 Parallelism by Partition
Oracle Machine Learning for SQL supports parallelism by partition.
For example, a user can choose PRODUCT_ID as a partition column and generate forecasts for different products in a single model build. Although a distinct smoothing model is built for each partition, all partitions share the same model settings. For example, if the EXSM_MODEL setting is set to EXSM_SIMPLE, all partition models are simple Exponential Smoothing models. Time series from different partitions can be distributed to different processes and processed in parallel. The model for each time series is built serially.
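The pattern of building each partition's model serially while distributing partitions across workers can be sketched as below. For brevity this sketch uses a thread pool rather than separate processes, and the function names are invented; it is not the OML4SQL implementation:

```python
from concurrent.futures import ThreadPoolExecutor

def fit_simple_esm(part, values, alpha):
    """Fit one partition's series serially with the shared setting alpha."""
    level = values[0]
    for v in values[1:]:
        level = alpha * v + (1 - alpha) * level
    return part, level

def fit_all_partitions(partitions, alpha, workers=4):
    """Distribute the independent per-partition builds across a worker pool.

    partitions maps a partition ID to its ordered series of values; every
    partition shares the same model settings (here, just alpha).
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(fit_simple_esm, p, v, alpha)
                   for p, v in partitions.items()]
        return dict(f.result() for f in futures)
```

Because the partitions are independent, the results are identical to a serial build; only the wall-clock time changes.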
20.2.6 Initial Value Optimization
With long seasonal cycles, users can choose not to optimize the ESM model initial values beyond an initial estimate.
This is in contrast to standard ESM optimization, in which the initial values are adjusted during the optimization process to minimize error. Optimizing only the level, trend, and seasonality parameters rather than the initial values can result in significant performance improvements and faster optimization convergence. When domain knowledge indicates that long seasonal variation is a significant contributor to an accurate forecast, this approach is appropriate. Despite the performance benefits, Oracle does not recommend disabling the optimization of the initial values for typical short seasonal cycles because it may result in model overfitting and less reliable confidence bounds.
Related Topics