
Review Key

by Professor Throckmorton
for Time Series Econometrics
W&M ECON 408

Stationarity

Conditions

Define covariance stationarity, i.e., what are the conditions for a time series to be weakly stationary?

  1. A stable mean over time, i.e., $E(y_t) = \mu$ (not a function of $t$).

  2. The variance is also constant over time, i.e., $Var(y_t) = E[(y_t - \mu)^2] = \sigma^2$.

  3. The autocovariance is stable over time, i.e., $Cov(y_t, y_{t-\tau}) = \gamma(\tau)$ (not a function of $t$), where $\tau$ is known as the displacement or lag.
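
These conditions can be illustrated on simulated data. The sketch below (not part of the original key) assumes a stationary AR(1), $y_t = 0.5 y_{t-1} + \varepsilon_t$ with unit-variance shocks, values chosen only for the example, and checks that the sample mean, variance, and lag-1 autocovariance are close to their constant theoretical counterparts:

```python
import numpy as np

rng = np.random.default_rng(0)
phi, sigma, T = 0.5, 1.0, 200_000

# Simulate a stationary AR(1): y_t = phi*y_{t-1} + eps_t
eps = rng.normal(0.0, sigma, T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi * y[t - 1] + eps[t]

y = y[1000:]  # drop burn-in so the initial condition washes out
mean_hat = y.mean()
var_hat = y.var()
# Sample autocovariance at lag 1; stationarity says it depends only on the lag
gamma1_hat = np.mean((y[1:] - mean_hat) * (y[:-1] - mean_hat))

print(mean_hat)    # close to mu = 0
print(var_hat)     # close to sigma^2/(1 - phi^2) = 1.333...
print(gamma1_hat)  # close to phi*gamma(0) = 0.666...
```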

Random Walk

Write down a random walk and solve for its variance, i.e., $Var(y_t)$. Given your answer, is a random walk stationary? Why or why not?

  • A (mean 0) random walk is $y_t = y_{t-1} + \varepsilon_t$. Taking the variance and noting that $y_{t-1}$ is independent of $\varepsilon_t$,

    $$
    \begin{align*}
    Var(y_t) &= Var(y_{t-1} + \varepsilon_t) \\
    \rightarrow Var(y_t) &= Var(y_{t-1}) + Var(\varepsilon_t) \\
    \rightarrow Var(y_t) &= Var(y_{t-1}) + \sigma^2
    \end{align*}
    $$
  • That structure holds at all points in time

    $$
    \begin{align*}
    Var(y_t) &= Var(y_{t-1}) + \sigma^2 \\
    Var(y_{t-1}) &= Var(y_{t-2}) + \sigma^2 \\
    Var(y_{t-2}) &= Var(y_{t-3}) + \sigma^2
    \end{align*}
    $$
  • Combine, i.e., recursively substitute, to get $Var(y_t) = Var(y_0) + t\sigma^2$. Thus, the variance is not constant because it is a function of time, which violates one of the stationarity conditions.

  • Or you can prove $y_t$ is not stationary by contradiction: assume $y_t$ is stationary, so $Var(y_t) = Var(y_0)$; but then $0 = t\sigma^2$, which is false for $t > 0$.
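
The result $Var(y_t) = Var(y_0) + t\sigma^2$ can also be seen by simulation; this sketch (an illustration, with $y_0 = 0$ and $\sigma = 1$) estimates the cross-sectional variance of many random-walk paths at two dates:

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, sigma = 20_000, 400, 1.0

# N independent random-walk paths: y_t = y_{t-1} + eps_t, y_0 = 0
eps = rng.normal(0.0, sigma, (N, T))
y = eps.cumsum(axis=1)

# Cross-sectional variance at date t should grow like t*sigma^2
var_100 = y[:, 99].var()   # t = 100
var_400 = y[:, 399].var()  # t = 400
print(var_100, var_400)    # roughly 100 and 400
```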

MA Model

Invertibility

Consider the MA(1) process $y_t = \varepsilon_t + 0.6\varepsilon_{t-1}$. Is this process invertible? Justify your answer.

  • Rearrange to get $\varepsilon_t = y_t - 0.6 \varepsilon_{t-1}$, which holds at all points in time, e.g.,

    $$
    \begin{align*}
    \varepsilon_{t-1} &= y_{t-1} - 0.6 \varepsilon_{t-2} \\
    \varepsilon_{t-2} &= y_{t-2} - 0.6 \varepsilon_{t-3}
    \end{align*}
    $$
  • Combine these (i.e., recursively substitute) to get an AR($\infty$) model

    $$
    \varepsilon_t = y_t - 0.6 y_{t-1} + 0.6^2 y_{t-2} - 0.6^3 y_{t-3} + \cdots = y_t + \sum_{j=1}^\infty (-0.6)^j y_{t-j}
    $$

    or

    $$
    y_t = -\sum_{j=1}^\infty (-0.6)^j y_{t-j} + \varepsilon_t
    $$
  • $y_t$ exists/is finite since $(-0.6)^j$ goes to 0 as $j$ goes to $\infty$.
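
As a numerical check (not in the original key), a truncated version of the AR($\infty$) sum recovers $\varepsilon_t$ from simulated MA(1) data, because the weights $(-0.6)^j$ die out geometrically:

```python
import numpy as np

rng = np.random.default_rng(2)
T, theta = 5_000, 0.6

eps = rng.normal(0.0, 1.0, T)
y = eps.copy()
y[1:] += theta * eps[:-1]      # MA(1): y_t = eps_t + 0.6*eps_{t-1}

# Truncated AR(infinity) inversion: eps_t ~ sum_{j=0}^{J} (-0.6)^j y_{t-j}
J = 40
t = T - 1
eps_hat = sum((-theta) ** j * y[t - j] for j in range(J + 1))
print(abs(eps_hat - eps[t]))   # tiny: truncation error is order 0.6^(J+1)
```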

Autocovariance

Write down an MA(2) model. What is its first autocovariance, $\gamma(1)$?

  • A (mean 0) MA(2) model is $y_t = \varepsilon_t + \theta_1 \varepsilon_{t-1} + \theta_2 \varepsilon_{t-2}$

  • $\gamma(1) \equiv Cov(y_t, y_{t-1}) = E(y_t y_{t-1}) = E[(\varepsilon_t + \theta_1 \varepsilon_{t-1} + \theta_2 \varepsilon_{t-2})(\varepsilon_{t-1} + \theta_1 \varepsilon_{t-2} + \theta_2 \varepsilon_{t-3})]$

  • Note that the $\varepsilon$'s are independent over time, so

    $$
    \begin{align*}
    \gamma(1) &= E(\theta_1 \varepsilon_{t-1}^2) + E(\theta_1 \theta_2 \varepsilon_{t-2}^2) \\
    \rightarrow \gamma(1) &= \theta_1 \sigma^2 + \theta_1 \theta_2 \sigma^2 \\
    \rightarrow \gamma(1) &= \theta_1(1 + \theta_2) \sigma^2
    \end{align*}
    $$
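
The formula can be verified by simulation; the sketch below picks illustrative values $\theta_1 = 0.4$, $\theta_2 = 0.3$, $\sigma = 1$ and compares the sample autocovariance to $\theta_1(1 + \theta_2)\sigma^2$:

```python
import numpy as np

rng = np.random.default_rng(3)
T, th1, th2, sigma = 500_000, 0.4, 0.3, 1.0

eps = rng.normal(0.0, sigma, T)
# MA(2): y_t = eps_t + th1*eps_{t-1} + th2*eps_{t-2}
y = eps[2:] + th1 * eps[1:-1] + th2 * eps[:-2]

gamma1_hat = np.mean(y[1:] * y[:-1])   # sample E[y_t y_{t-1}] (mean-0 series)
gamma1 = th1 * (1 + th2) * sigma**2    # theoretical: 0.4 * 1.3 = 0.52
print(gamma1_hat, gamma1)
```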

AR Model

Causality

Show that $y_t = 0.7 y_{t-1} + \varepsilon_t$ is causal.

  • The AR(1) model structure is the same at all points in time

    $$
    \begin{align*}
    y_t &= 0.7 y_{t-1} + \varepsilon_t \\
    y_{t-1} &= 0.7 y_{t-2} + \varepsilon_{t-1} \\
    y_{t-2} &= 0.7 y_{t-3} + \varepsilon_{t-2}
    \end{align*}
    $$
  • Combine, i.e., recursively substitute, to get $y_t = 0.7^3 y_{t-3} + 0.7^2 \varepsilon_{t-2} + 0.7 \varepsilon_{t-1} + \varepsilon_t$

  • Rinse and repeat to get $y_t = \sum_{j=0}^\infty 0.7^j \varepsilon_{t-j}$ (i.e., the MA($\infty$) model or Wold representation)

  • $y_t$ exists/is finite since $0.7^j$ goes to 0 as $j$ goes to $\infty$.
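
A quick numerical check of the Wold representation (an illustration, not part of the key): a truncated version of the MA($\infty$) sum reproduces $y_t$ from the simulated shocks:

```python
import numpy as np

rng = np.random.default_rng(4)
T, phi = 3_000, 0.7

eps = rng.normal(0.0, 1.0, T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi * y[t - 1] + eps[t]   # AR(1) recursion

# Truncated Wold representation: y_t ~ sum_{j=0}^{J} 0.7^j eps_{t-j}
J = 80
t = T - 1
y_hat = sum(phi**j * eps[t - j] for j in range(J + 1))
print(abs(y_hat - y[t]))  # small: the remainder is 0.7^(J+1) * y_{t-J-1}
```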

ARMA Model

ARMA(1,1) $\rightarrow$ AR($\infty$)

Show that an ARMA(1,1) process can be rewritten as an AR($\infty$). Find the first three AR coefficients.

  • An ARMA(1,1) is $y_t = \phi y_{t-1} + \varepsilon_t + \theta \varepsilon_{t-1}$, which holds at all points in time

    $$
    \begin{align*}
    \varepsilon_t &= y_t - \phi y_{t-1} - \theta \varepsilon_{t-1} \\
    \varepsilon_{t-1} &= y_{t-1} - \phi y_{t-2} - \theta \varepsilon_{t-2} \\
    \varepsilon_{t-2} &= y_{t-2} - \phi y_{t-3} - \theta \varepsilon_{t-3} \\
    \varepsilon_{t-3} &= y_{t-3} - \phi y_{t-4} - \theta \varepsilon_{t-4}
    \end{align*}
    $$
  • Combine, i.e., recursively substitute, to get

    $$
    \begin{align*}
    \varepsilon_t &= y_t - \phi y_{t-1} - \theta (y_{t-1} - \phi y_{t-2} - \theta \varepsilon_{t-2}) \\
    \rightarrow \varepsilon_t &= y_t - (\phi + \theta)y_{t-1} + \theta \phi y_{t-2} + \theta^2 \varepsilon_{t-2} \\
    \rightarrow \varepsilon_t &= y_t - (\phi + \theta)y_{t-1} + \theta(\phi + \theta) y_{t-2} - \theta^2 \phi y_{t-3} - \theta^3 \varepsilon_{t-3} \\
    \rightarrow \varepsilon_t &= y_t - (\phi + \theta)y_{t-1} + \theta(\phi + \theta) y_{t-2} - \theta^2 (\phi + \theta) y_{t-3} + \theta^3 \phi y_{t-4} + \theta^4 \varepsilon_{t-4}
    \end{align*}
    $$

    Thus, the first 3 AR coefficients are $-(\phi + \theta)$, $\theta(\phi + \theta)$, and $-\theta^2(\phi + \theta)$

  • Using intuition to convert the ARMA(1,1) to an AR($\infty$) yields

    $$
    \varepsilon_t = y_t - (\phi + \theta) \sum_{j=0}^\infty (-\theta)^j y_{t-j-1}
    $$

    We can verify that this expression correctly reproduces the first 3 AR coefficients above.
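
One way to check it numerically (an illustration, with $\phi = 0.5$ and $\theta = 0.4$ chosen arbitrarily) is to recover $\varepsilon_t$ from simulated ARMA(1,1) data using a truncated version of the AR($\infty$) sum:

```python
import numpy as np

rng = np.random.default_rng(5)
T, phi, theta = 4_000, 0.5, 0.4

eps = rng.normal(0.0, 1.0, T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi * y[t - 1] + eps[t] + theta * eps[t - 1]  # ARMA(1,1)

# AR(infinity) weight on y_{t-j-1} is (phi+theta)*(-theta)^j
J = 60
t = T - 1
eps_hat = y[t] - (phi + theta) * sum(
    (-theta) ** j * y[t - j - 1] for j in range(J + 1)
)
print(abs(eps_hat - eps[t]))  # small, since theta^j vanishes geometrically
```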

Variance

Find the variance of an ARMA(1,1) process.

  • An ARMA(1,1) is $y_t = \phi y_{t-1} + \varepsilon_t + \theta \varepsilon_{t-1}$. Taking $Var$ directly would be tricky because of the dependent terms.

  • $\gamma(0) \equiv E[(\phi y_{t-1} + \varepsilon_t + \theta \varepsilon_{t-1})(\phi y_{t-1} + \varepsilon_t + \theta \varepsilon_{t-1})]$

    $$
    \begin{align*}
    &= E(\phi^2 y_{t-1}^2) + 2\phi\theta E[y_{t-1} \varepsilon_{t-1}] + E(\varepsilon_t^2) + \theta^2 E(\varepsilon_{t-1}^2) \\
    &= \phi^2 \gamma(0) + 2\phi\theta E[y_{t-1} \varepsilon_{t-1}] + (1 + \theta^2)\sigma^2
    \end{align*}
    $$
  • Note that $E[y_{t-1} \varepsilon_{t-1}] = \sigma^2$ (only the $\varepsilon_{t-1}$ term in $y_{t-1}$ is correlated with $\varepsilon_{t-1}$), thus

    $$
    \begin{align*}
    \gamma(0) &= \phi^2 \gamma(0) + 2\phi\theta \sigma^2 + (1 + \theta^2)\sigma^2 \\
    \rightarrow \gamma(0)(1 - \phi^2) &= (1 + 2\phi\theta + \theta^2)\sigma^2 \\
    \rightarrow \gamma(0) &= (1 + 2\phi\theta + \theta^2)\sigma^2 / (1 - \phi^2)
    \end{align*}
    $$
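
The closed form can be checked against a long simulated sample (illustrative parameters $\phi = 0.5$, $\theta = 0.4$, $\sigma = 1$, not from the key):

```python
import numpy as np

rng = np.random.default_rng(6)
T, phi, theta, sigma = 500_000, 0.5, 0.4, 1.0

eps = rng.normal(0.0, sigma, T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi * y[t - 1] + eps[t] + theta * eps[t - 1]  # ARMA(1,1)

gamma0 = (1 + 2 * phi * theta + theta**2) * sigma**2 / (1 - phi**2)
var_hat = y[1000:].var()   # sample variance after burn-in
print(var_hat, gamma0)     # both close to 1.56/0.75 = 2.08
```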

ARIMA Model

Differencing

Suppose you have the ARIMA(1,1,0) model $\Delta y_t = 0.5 \Delta y_{t-1} + \varepsilon_t$. Rewrite it in its original (non-differenced) form.

  • $\Delta y_t = y_t - y_{t-1}$ and $\Delta y_{t-1} = y_{t-1} - y_{t-2}$

  • Substituting these in, we get

    $$
    y_t - y_{t-1} = 0.5 (y_{t-1} - y_{t-2}) + \varepsilon_t
    $$
  • Rearranging yields an AR(2):

    $$
    y_t = 1.5 y_{t-1} - 0.5 y_{t-2} + \varepsilon_t
    $$
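
A simulation check (not part of the key): a series generated by the AR(2) form has first differences that satisfy the original ARIMA(1,1,0) recursion exactly:

```python
import numpy as np

rng = np.random.default_rng(7)
T = 1_000
eps = rng.normal(0.0, 1.0, T)

# Simulate via the AR(2) form: y_t = 1.5*y_{t-1} - 0.5*y_{t-2} + eps_t
y = np.zeros(T)
for t in range(2, T):
    y[t] = 1.5 * y[t - 1] - 0.5 * y[t - 2] + eps[t]

# Its first difference should follow the AR(1): dy_t = 0.5*dy_{t-1} + eps_t
dy = np.diff(y)
resid = dy[1:] - 0.5 * dy[:-1] - eps[2:]
print(np.abs(resid).max())  # essentially zero (floating-point noise)
```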

Integration

Explain how you would determine the order of integration $d$ for a time series.

  1. Visually inspect the raw data for trends and seasonality. If either is present, remove it by taking a first difference.

  2. Visually inspect the differenced data for a constant mean, variance, and autocovariance, and plot the ACF. If the data still appear non-stationary, take another difference.

  3. Repeat step 2 until the data appear stationary. The order of integration $d$ is the number of differences taken; differencing more than twice is unusual.
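
As a minimal numerical illustration of this procedure (values chosen only for the example), differencing an I(1) series once, here a random walk with drift 0.2, yields a series whose sample moments look stationary; in practice a formal unit-root test such as the augmented Dickey-Fuller test (`statsmodels.tsa.stattools.adfuller`) would complement the visual inspection:

```python
import numpy as np

rng = np.random.default_rng(8)
T = 20_000

# An I(1) series: random walk with drift 0.2
eps = rng.normal(0.0, 1.0, T)
y = (0.2 + eps).cumsum()

dy = np.diff(y)          # first difference: 0.2 + eps_t, which is stationary
# lag-1 sample autocorrelation of the differenced series
dyc = dy - dy.mean()
rho1 = np.mean(dyc[1:] * dyc[:-1]) / dyc.var()
print(dy.mean(), rho1)   # drift near 0.2, autocorrelation near 0
```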