Exponential smoothing dates to the 1950s and 1960s, with foundational contributions from Brown (1959), Holt (1957), and Winters (1960). Brown introduced simple exponential smoothing for inventory demand forecasting for the U.S. Navy; Holt extended it to handle linear trends; Winters added a seasonal component. For decades, these methods existed as ad hoc recursive updating rules without a formal statistical model behind them. The turning point came when Ord, Koehler, and Snyder (1997) and Hyndman, Koehler, Snyder, and Grose (2002) recast exponential smoothing as a family of state-space models with explicit error distributions, creating what is now called the ETS (Error, Trend, Seasonal) framework. This gave exponential smoothing a proper likelihood function, principled model selection via information criteria, and correct prediction intervals.
The core mechanism is recursive state updating. At each time step, the model observes the new data point, computes a one-step-ahead prediction error (the 'innovation'), and uses that error to update three state components: the level (the current underlying value of the series), the trend (the current rate of change), and the seasonal component (the current seasonal factor for this period within the cycle). The smoothing parameters alpha, beta, and gamma control how much weight the innovation receives in each update -- high values mean the states react quickly to new information, low values mean they change slowly. The key insight of the ETS framework is that these updating equations are not just rules of thumb; they are the transition equations of a state-space model whose measurement equation links the states to the observed data through either additive or multiplicative composition.
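As a concrete sketch, the forecast-error-update cycle for the fully additive model ETS(A,A,A) can be written directly from its transition equations. The function below is illustrative only; the function name and calling convention are made up here and do not match any particular library's internal parameterization:

```python
def ets_aaa_update(y, level, trend, seasonal, alpha, beta, gamma):
    """One recursive update of ETS(A,A,A).

    `seasonal` is a list of the m most recent seasonal states, oldest first,
    so seasonal[0] is the factor that applies to the current period.
    """
    # Measurement equation: one-step-ahead forecast from the current states.
    forecast = level + trend + seasonal[0]
    # Innovation: the one-step-ahead prediction error.
    e = y - forecast
    # Transition equations: each state absorbs a share of the innovation,
    # weighted by its smoothing parameter.
    new_level = level + trend + alpha * e
    new_trend = trend + beta * e
    new_seasonal = seasonal[1:] + [seasonal[0] + gamma * e]
    return forecast, e, new_level, new_trend, new_seasonal

# Toy update: level 10, trend 1, quarterly seasonality with zero factors,
# observation 12.5 arrives.  Forecast = 11.0, innovation = 1.5.
f, e, lvl, trd, seas = ets_aaa_update(12.5, 10.0, 1.0, [0.0] * 4, 0.2, 0.05, 0.1)
```

Note that when the innovation is zero, the states evolve deterministically (level advances by the trend, everything else is unchanged), which is exactly the behavior the smoothing parameters modulate.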
ETS is one of the two dominant univariate forecasting frameworks in applied statistics, alongside ARIMA/SARIMA. In the M3 forecasting competition (2000, 3,003 series), automatic exponential smoothing methods ranked among the top performers, and in the M4 competition (2018, 100,000 series), ETS remained competitive with machine learning approaches on monthly and quarterly macro data. The forecast package for R and the statsforecast library for Python provide fully automatic ETS model selection via AICc, searching the variants of the ETS taxonomy (with a handful of unstable combinations excluded by default). Central banks and statistical agencies that use ARIMA as their primary framework often run ETS in parallel as a robustness check -- the ECB's forecasting platform and the Reserve Bank of Australia's MARTIN model both maintain ETS benchmarks alongside ARIMA baselines.
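The selection criterion itself is simple enough to sketch in a few lines. AICc is AIC plus a small-sample correction; automatic selection fits each candidate model by maximum likelihood and keeps the one with the lowest AICc. The helper names and the log-likelihood numbers below are purely illustrative, not taken from any real fit:

```python
def aicc(log_lik, k, n):
    """Corrected Akaike Information Criterion.

    k = number of free parameters (smoothing parameters, initial states,
        and the innovation variance); n = number of observations.
    """
    aic = -2.0 * log_lik + 2.0 * k
    return aic + (2.0 * k * (k + 1)) / (n - k - 1)

def select_model(candidates, n):
    """Return the candidate with the lowest AICc.

    `candidates` maps an ETS code such as 'ETS(A,N,N)' to a tuple
    (maximized log-likelihood, parameter count).
    """
    return min(candidates, key=lambda name: aicc(*candidates[name], n))

# Hypothetical fits on 48 observations: the richer trend model gains a
# little likelihood but pays a larger penalty, so the simpler model wins.
fits = {"ETS(A,N,N)": (-120.0, 3), "ETS(A,A,N)": (-118.5, 5)}
best = select_model(fits, 48)  # -> 'ETS(A,N,N)'
```

The correction term matters precisely in the short-series regime where ETS is typically applied; as n grows, AICc converges to plain AIC.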
The ETS taxonomy classifies 30 models by three design choices: the error structure (Additive or Multiplicative), the trend structure (None, Additive, Additive-damped, Multiplicative, Multiplicative-damped), and the seasonal structure (None, Additive, Multiplicative). Each combination yields a distinct state-space model with its own likelihood, forecast function, and prediction interval formula. Of these 30, roughly 15 are commonly used in practice; several multiplicative-trend variants are unstable and excluded from automatic selection. The naming convention uses a three-letter code: ETS(A,A,M) means Additive error, Additive trend, Multiplicative seasonality, and damped-trend variants carry a subscript d, as in ETS(A,Ad,N).
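The count of 30 follows directly from the three design choices (2 error types x 5 trend types x 3 seasonal types). A minimal enumeration of the taxonomy's codes, with "Ad"/"Md" standing in for the damped subscripts, might look like this:

```python
from itertools import product

ERRORS = ["A", "M"]                    # Additive, Multiplicative
TRENDS = ["N", "A", "Ad", "M", "Md"]   # None, Additive, Additive-damped,
                                       # Multiplicative, Multiplicative-damped
SEASONALS = ["N", "A", "M"]            # None, Additive, Multiplicative

# Every combination of the three choices is a distinct state-space model.
taxonomy = [f"ETS({e},{t},{s})" for e, t, s in product(ERRORS, TRENDS, SEASONALS)]
# len(taxonomy) -> 30
```

Automatic selection routines prune this list before fitting, dropping the multiplicative-trend combinations noted above as unstable.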