Chapter 3 Decomposition

Tyler George

Cornell College
STA 364 Spring 2025 Block 5

Setup

library(purrr)
library(gganimate)
library(latex2exp)
library(fpp3)

Transformations and adjustments

Per capita adjustments

global_economy |>
  filter(Country == "Australia") |>
  autoplot(GDP)

Per capita adjustments

global_economy |>
  filter(Country == "Australia") |>
  autoplot(GDP / Population)

You Try!

Consider the GDP information in global_economy. Plot the GDP per capita for each country over time. Which country has the highest GDP per capita? How has this changed over time?

Inflation adjustments

  • Data which are affected by the value of money are best adjusted before modelling.
  • For example, the average cost of a new house will have increased over the last few decades due to inflation. A $200,000 house this year is not the same as a $200,000 house twenty years ago.

Inflation adjustments

  • To make these adjustments, a price index is used. If \(z_t\) denotes the price index and \(y_t\) denotes the original house price in year \(t\), then

\[x_t=y_t/z_t \cdot z_{2000}\]

gives the adjusted house price at year 2000 dollar values.

  • Price indexes are often constructed by government agencies. For consumer goods, a common price index is the Consumer Price Index (or CPI).
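As a toy sketch of the adjustment \(x_t = y_t/z_t \cdot z_{2000}\), here is the formula made concrete in base R. All numbers below are invented for illustration, not real house prices or CPI values:

```r
# Toy illustration of the inflation adjustment x_t = y_t / z_t * z_2000.
# Prices and index values are invented, not real data.
years <- 2000:2004
price <- c(200, 215, 230, 250, 270)  # nominal house prices ($ thousands)
cpi   <- c(100, 103, 106, 110, 115)  # price index, base year 2000 = 100

adjusted <- price / cpi * cpi[years == 2000]  # prices in year-2000 dollars
round(adjusted, 1)
```

With real data you would join a CPI series to the data by year and divide, as the formula above describes.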

Inflation adjustments

aus_newspaper_retail <- readr::read_csv('data/aus_newspaper_retail.csv')
# Turnover: Retail turnover in $Million AUD

aus_newspaper_retail <- aus_newspaper_retail |>
  select(Year, Turnover, name) |> # Keep these three columns/variables
  group_by(Year, name) |> # Tells R to calculate by groups of Year and name
  summarise(sum_Turnover = sum(Turnover)) |> # Adds up Turnover for each Year/name combination
  as_tsibble(index = Year, key = "name") # Converts to a time series (tsibble)

aus_newspaper_retail |> autoplot(sum_Turnover) +
  facet_grid(name ~ ., scales = "free_y") + # This will make a grid of plots
    # with name used to break into multiple plots along the y direction
  labs(title = "Turnover: Australian print media industry", y = "$AU")

Mathematical transformations

If the data show different variation at different levels of the series, then a transformation can be useful.

Denote original observations as \(y_1,\dots,y_T\) and transformed observations as \(w_1, \dots, w_T\).

Transformations, in increasing order of strength:

  • Square root: \(w_t = \sqrt{y_t}\)
  • Cube root: \(w_t = \sqrt[3]{y_t}\)
  • Logarithm: \(w_t = \log(y_t)\)

Logarithms, in particular, are useful because they are more interpretable: changes in a log value are relative (percent) changes on the original scale.

Mathematical transformations (1/4)

food <- aus_retail |>
  filter(Industry == "Food retailing") |>
  summarise(Turnover = sum(Turnover))

Mathematical transformations (2/4)

food |> autoplot(sqrt(Turnover)) +
  labs(y = "Square root turnover")

Mathematical transformations (3/4)

food |> autoplot(Turnover^(1/3)) +
  labs(y = "Cube root turnover")

Mathematical transformations (4/4)

food |> autoplot(log(Turnover)) +
  labs(y = "Log turnover")

(log here is the natural log, base \(e\))

Box-Cox transformations

Each of these transformations is close to a member of the family of Box-Cox transformations:

\[w_t = \left\{\begin{array}{ll} \log(y_t), & \quad \lambda = 0; \\ (\operatorname{sign}(y_t)\,|y_t|^\lambda-1)/\lambda , & \quad \lambda \ne 0. \end{array}\right.\]

  • Actually the Bickel-Doksum transformation (allowing for \(y_t<0\))
  • \(\lambda=1\): (No substantive transformation)
  • \(\lambda=\frac12\): (Square root plus linear transformation)
  • \(\lambda=0\): (Natural logarithm)
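A minimal base-R sketch of the piecewise definition above (the fpp3 packages provide box_cox() for real use; this just makes the formula concrete, and the function name is my own):

```r
# Sketch of the Bickel-Doksum form of the Box-Cox transformation.
box_cox_bd <- function(y, lambda) {
  if (lambda == 0) {
    log(y)                                  # lambda = 0: natural logarithm
  } else {
    (sign(y) * abs(y)^lambda - 1) / lambda  # lambda != 0
  }
}

box_cox_bd(4, 0.5)  # (sqrt(4) - 1) / 0.5 = 2
box_cox_bd(10, 1)   # y - 1 = 9: no substantive transformation
box_cox_bd(10, 0)   # log(10)
```

Note that the \(\operatorname{sign}(y_t)|y_t|^\lambda\) form is what allows negative values of \(y_t\), unlike the original Box-Cox family.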

Box-Cox transformations

food |>
  features(Turnover, features = guerrero)
# A tibble: 1 × 1
  lambda_guerrero
            <dbl>
1          0.0895
  • This attempts to balance the seasonal fluctuations and random variation across the series.
  • Always check the results.
  • A low value of \(\lambda\) can give extremely large prediction intervals.

Box-Cox transformations

food |> autoplot(box_cox(Turnover, 0.0895)) +
  labs(y = "Box-Cox transformed turnover")

Transformations

  • Often no transformation is needed.
  • Simple transformations are easier to explain and work well enough.
  • Transformations can have a very large effect on prediction intervals.
  • If some data are zero or negative, then use \(\lambda>0\).
  • log1p() can also be useful for data with zeros.
  • Choosing logs is a simple way to force forecasts to be positive.
  • Transformations must be reversed to obtain forecasts on the original scale. (Handled automatically by fable.)
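Since log(0) is undefined (it returns -Inf in R), log1p() computes \(\log(1+y)\) and so handles zeros gracefully. A quick base-R check:

```r
# log1p(y) = log(1 + y): defined at y = 0, unlike log(y).
log(0)     # -Inf
log1p(0)   # 0
log1p(99)  # same as log(100)
```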

You Try!

  1. For the following series, find an appropriate transformation in order to stabilise the variance.

    • United States GDP from global_economy
    • Slaughter of Victorian “Bulls, bullocks and steers” in aus_livestock
    • Victorian Electricity Demand from vic_elec.
    • Gas production from aus_production
  2. Why is a Box-Cox transformation unhelpful for the canadian_gas data?

Time series components

Time series patterns

Recall

Trend
pattern exists when there is a long-term increase or decrease in the data.
Cyclic
pattern exists when data exhibit rises and falls that are not of fixed period (duration usually of at least 2 years).
Seasonal
pattern exists when a series is influenced by seasonal factors (e.g., the quarter of the year, the month, or day of the week).

Time series decomposition

\(y_t = f(S_t, T_t, R_t)\)

where \(y_t\) = data at period \(t\),

\(T_t\) = trend-cycle component at period \(t\),

\(S_t\) = seasonal component at period \(t\),

\(R_t\) = remainder component at period \(t\).

Additive decomposition: \(y_t = S_t + T_t + R_t.\)

Multiplicative decomposition: \(y_t = S_t \times T_t \times R_t.\)

Time series decomposition

  • An additive model is appropriate if the magnitude of the seasonal fluctuations does not vary with the level of the series.
  • If the seasonal fluctuations are proportional to the level of the series, then a multiplicative model is appropriate.
  • Multiplicative decomposition is more prevalent with economic series.
  • Alternative: use a Box-Cox transformation, and then use an additive decomposition.
  • Logs turn a multiplicative relationship into an additive relationship:

\[y_t = S_t \times T_t \times R_t \quad\Rightarrow\quad \log y_t = \log S_t + \log T_t + \log R_t.\]
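A quick numeric check of this identity, with made-up component values:

```r
# Made-up multiplicative components: logging them gives an additive structure.
S_t <- 1.2; T_t <- 100; R_t <- 0.95
y_t <- S_t * T_t * R_t
all.equal(log(y_t), log(S_t) + log(T_t) + log(R_t))  # TRUE (up to rounding)
```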

US Retail Employment

us_retail_employment <- us_employment |>
  filter(year(Month) >= 1990, Title == "Retail Trade") |>
  select(-Series_ID)
us_retail_employment
# A tsibble: 357 x 3 [1M]
      Month Title        Employed
      <mth> <chr>           <dbl>
 1 1990 Jan Retail Trade   13256.
 2 1990 Feb Retail Trade   12966.
 3 1990 Mar Retail Trade   12938.
 4 1990 Apr Retail Trade   13012.
 5 1990 May Retail Trade   13108.
 6 1990 Jun Retail Trade   13183.
 7 1990 Jul Retail Trade   13170.
 8 1990 Aug Retail Trade   13160.
 9 1990 Sep Retail Trade   13113.
10 1990 Oct Retail Trade   13185.
# ℹ 347 more rows

US Retail Employment

us_retail_employment |>
  autoplot(Employed) +
  labs(y="Persons (thousands)", title="Total employment in US retail")

US Retail Employment

us_retail_employment |>
  model(stl = STL(Employed))
# A mable: 1 x 1
      stl
  <model>
1   <STL>

US Retail Employment

dcmp <- us_retail_employment |>
  model(stl = STL(Employed))
components(dcmp)
# A dable: 357 x 7 [1M]
# Key:     .model [1]
# :        Employed = trend + season_year + remainder
   .model    Month Employed  trend season_year remainder season_adjust
   <chr>     <mth>    <dbl>  <dbl>       <dbl>     <dbl>         <dbl>
 1 stl    1990 Jan   13256. 13288.      -33.0      0.836        13289.
 2 stl    1990 Feb   12966. 13269.     -258.     -44.6          13224.
 3 stl    1990 Mar   12938. 13250.     -290.     -22.1          13228.
 4 stl    1990 Apr   13012. 13231.     -220.       1.05         13232.
 5 stl    1990 May   13108. 13211.     -114.      11.3          13223.
 6 stl    1990 Jun   13183. 13192.      -24.3     15.5          13207.
 7 stl    1990 Jul   13170. 13172.      -23.2     21.6          13193.
 8 stl    1990 Aug   13160. 13151.       -9.52    17.8          13169.
 9 stl    1990 Sep   13113. 13131.      -39.5     22.0          13153.
10 stl    1990 Oct   13185. 13110.       61.6     13.2          13124.
# ℹ 347 more rows

US Retail Employment

us_retail_employment |>
  autoplot(Employed, color='gray') +
  autolayer(components(dcmp), trend, color='#523178') +
  labs(y="Persons (thousands)", title="Total employment in US retail")

US Retail Employment

components(dcmp) |> autoplot()

US Retail Employment

components(dcmp) |> gg_subseries(season_year)

Seasonal adjustment

  • Useful by-product of decomposition: an easy way to calculate seasonally adjusted data.
  • Additive decomposition: seasonally adjusted data given by \[y_t - S_t = T_t + R_t\]
  • Multiplicative decomposition: seasonally adjusted data given by \[y_t / S_t = T_t \times R_t\]
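A toy additive example (numbers invented): subtracting the seasonal component from the data leaves trend plus remainder.

```r
# Build a toy additive series with a known seasonal pattern and no noise.
trend    <- 1:8
seasonal <- rep(c(3, -3), times = 4)  # period-2 seasonal component
y <- trend + seasonal

# Additive seasonal adjustment: y_t - S_t = T_t + R_t (here R_t = 0).
season_adjust <- y - seasonal
season_adjust  # recovers the trend exactly
```

In practice \(S_t\) is estimated by a decomposition (like the season_adjust column in components(dcmp) above) rather than known exactly.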

US Retail Employment

us_retail_employment |>
  autoplot(Employed, color='gray') +
  autolayer(components(dcmp), season_adjust, color='#0072B2') +
  labs(y="Persons (thousands)", title="Total employment in US retail")

Seasonal adjustment

  • We use estimates of \(S\) based on past values to seasonally adjust a current value.
  • Seasonally adjusted series reflect remainders as well as trend. Therefore they are not “smooth” and “downturns” or “upturns” can be misleading.
  • It is better to use the trend-cycle component to look for turning points.

Moving Averages (MA)

Moving Averages

  • The classical method of time series decomposition originated in the 1920s and was widely used until the 1950s.
  • A moving average of order \(m\) can be written as \[\hat{T}_t = \frac{1}{m}\sum_{j=-k}^{k} y_{t+j}\] where \(m = 2k+1\).
  • That is, the estimate of the trend-cycle at time \(t\) is obtained by averaging values of the time series within \(k\) periods of \(t\).
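A hand-rolled version of this formula in base R (for real work use slider::slide_dbl() as shown later in these slides; this sketch, with my own function name, just makes the definition concrete):

```r
# m-MA: T_hat_t = (1/m) * sum_{j=-k}^{k} y_{t+j}, with m = 2k + 1 (m odd).
moving_average <- function(y, m) {
  k <- (m - 1) / 2
  n <- length(y)
  out <- rep(NA_real_, n)  # ends stay NA: no complete window there
  for (t in (k + 1):(n - k)) {
    out[t] <- mean(y[(t - k):(t + k)])
  }
  out
}

moving_average(c(1, 3, 5, 7, 9, 11, 13), m = 5)
# NA NA  5  7  9 NA NA
```

The NA values at the ends show why no trend estimate is available for the first and last \(k\) observations.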

MA

  • Averaging eliminates some of the randomness in the data, leaving a smooth trend-cycle component.
  • Called an m-MA, or MA(m), meaning a moving average of order m.

MA global economy

global_economy |>
  filter(Country == "Australia") |>
  autoplot(Exports) +
  labs(y = "% of GDP", title = "Total Australian exports")

MA global economy

  • A moving average is an average that moves…
  • MA(5) means we use the average of the two time series values before \(y_t\), \(y_t\) itself, and the two values after \(y_t\).
  • Ex: to get the MA(5) for \(y_{100}\) we would use the average of \(y_{98}, y_{99}, y_{100}, y_{101}\), and \(y_{102}\).
  • What is a limitation of this method?

MA global economy sliding window

  • slide_dbl() from the slider package applies a function to “sliding” time windows. In this case, we use the mean() function with a window of size 5.
aus_exports <- global_economy |>
  filter(Country == "Australia") |>
  mutate(
    `5-MA` = slider::slide_dbl(Exports, mean,
                .before = 2, .after = 2, .complete = TRUE)
  )

MA trend line

aus_exports |>
  autoplot(Exports) +
  geom_line(aes(y = `5-MA`), colour = "#D55E00") +
  labs(y = "% of GDP",
       title = "Total Australian exports") 

History of time series decomposition

History of time series decomposition

  • Classical method originated in 1920s.
  • Census II method introduced in 1957. Basis for X-11 method and variants (including X-12-ARIMA, X-13-ARIMA)
  • STL method introduced in 1983
  • TRAMO/SEATS introduced in 1990s.

National Statistics Offices

  • ABS uses X-12-ARIMA
  • US Census Bureau uses X-13ARIMA-SEATS
  • Statistics Canada uses X-12-ARIMA
  • ONS (UK) uses X-12-ARIMA
  • EuroStat use X-13ARIMA-SEATS

X-11 decomposition

Advantages

  • Relatively robust to outliers
  • Completely automated choices for trend and seasonal changes
  • Very widely tested on economic data over a long period of time.

Disadvantages

  • No prediction/confidence intervals
  • Ad hoc method with no underlying model
  • Only developed for quarterly and monthly data

Extensions: X-12-ARIMA and X-13-ARIMA

  • The X-11, X-12-ARIMA and X-13-ARIMA methods are based on Census II decomposition.
  • These allow adjustments for trading days and other explanatory variables.
  • Known outliers can be omitted.
  • Level shifts and ramp effects can be modelled.
  • Missing values estimated and replaced.
  • Holiday factors (e.g., Easter, Labour Day) can be estimated.

X-13ARIMA-SEATS

Advantages

  • Model-based
  • Smooth trend estimate
  • Allows estimates at end points
  • Allows changing seasonality
  • Developed for economic data

Disadvantages

  • Only developed for quarterly and monthly data

STL decomposition

LOESS?

https://www.youtube.com/watch?v=Vf7oJ6z2LCc

STL decomposition

  • STL: “Seasonal and Trend decomposition using Loess”
  • Very versatile and robust.
  • Unlike X-12-ARIMA, STL will handle any type of seasonality.
  • Seasonal component allowed to change over time, and rate of change controlled by user.
  • Smoothness of trend-cycle also controlled by user.
  • Robust to outliers
  • No trading day or calendar adjustments.
  • Only additive.
  • Take logs to get multiplicative decomposition.
  • Use Box-Cox transformations to get other decompositions.

STL decomposition

us_retail_employment |>
  model(STL(Employed ~ season(window=9), robust=TRUE)) |>
  components() |> autoplot() +
    labs(title = "STL decomposition: US retail employment")

STL decomposition

STL decomposition

us_retail_employment |>
  model(STL(Employed ~ season(window=5))) |>
  components()

us_retail_employment |>
  model(STL(Employed ~ trend(window=15) +
                       season(window="periodic"),
            robust = TRUE)
  ) |> components()
  • trend(window = ?) controls the wiggliness of the trend component.
  • season(window = ?) controls the variation in the seasonal component.
  • season(window = 'periodic') is equivalent to an infinite window (a fixed seasonal pattern).

STL decomposition

us_retail_employment |>
  model(STL(Employed)) |>
  components() |> autoplot()

Note

STL() chooses the trend and seasonal windows automatically by default. The model formula can also include transformations.

STL decomposition

  • Algorithm that updates trend and seasonal components iteratively.
  • Starts with \(\hat{T}_t=0\).
  • Uses a mixture of loess and moving averages to successively refine the trend and seasonal estimates.
  • The trend window controls the loess bandwidth applied to the deseasonalised values.
  • The season window controls the loess bandwidth applied to the detrended subseries.
  • Robustness weights are based on the remainder.
  • Default season window = 13
  • Default trend window = nextodd(ceiling((1.5*period)/(1-(1.5/s.window))))