Review 1A Key#

by Professor Throckmorton
for Time Series Econometrics
W&M ECON 408/PUBP 616

Stationarity#

Conditions#

Define covariance stationarity, i.e., what are the conditions for a time series to be weakly stationary?

  1. A stable mean/average over time, i.e., \(E(y_t) = \mu\) (is not a function of \(t\)).

  2. The variance is also constant over time, i.e., \(Var(y_t) = E[(y_t - \mu)^2] = \sigma^2\).

  3. The autocovariance is stable over time, i.e., \(Cov(y_t,y_{t-\tau}) = \gamma(\tau)\) (is not a function of \(t\)). \(\tau\) is known as displacement or lag.

Random Walk#

Write down a random walk and solve for its variance, i.e., \(Var(y_t)\). Given your answer, is a random walk stationary? Why or why not?

  • A (mean \(0\)) random walk is \(y_t = y_{t-1} + \varepsilon_t\). Taking the variance and noting that \(y_{t-1}\) is independent of \(\varepsilon_t\),

    \[\begin{align*} Var(y_t) &= Var(y_{t-1} + \varepsilon_t) \\ \rightarrow Var(y_t) &= Var(y_{t-1}) + Var(\varepsilon_t) \\ \rightarrow Var(y_t) &= Var(y_{t-1}) + \sigma^2 \end{align*}\]
  • That structure holds at all points in time

    \[\begin{align*} Var(y_t) &= Var(y_{t-1}) + \sigma^2 \\ Var(y_{t-1}) &= Var(y_{t-2}) + \sigma^2 \\ Var(y_{t-2}) &= Var(y_{t-3}) + \sigma^2 \end{align*}\]
  • Combine, i.e., recursively substitute, to get \(Var(y_t) = Var(y_0) + t\sigma^2\). Thus, the variance is not constant because it is a function of time, which violates one of the stationarity conditions.

  • Or you can prove \(y_t\) is not stationary by contradiction. Assume that \(y_t\) is stationary, so \(Var(y_t) = Var(y_0)\); but that means \(0 = t\sigma^2\), which is false for any \(t > 0\). The simulation sketch below also shows the variance growing with \(t\).
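
A minimal simulation sketch of this result, assuming Python with NumPy (the notes do not specify a language): the cross-sectional variance of many simulated random walks grows roughly like \(t\sigma^2\).

```python
# Minimal sketch (assumed: Python + NumPy): simulate many random walks and
# compare the cross-sectional variance at date t with t * sigma^2.
import numpy as np

rng = np.random.default_rng(0)
n_paths, T, sigma = 10_000, 200, 1.0

eps = rng.normal(0.0, sigma, size=(n_paths, T))  # shocks epsilon_1, ..., epsilon_T
y = eps.cumsum(axis=1)                           # y_t = y_{t-1} + eps_t with y_0 = 0

for t in (10, 100, 200):
    # sample Var(y_t) across paths vs. the theoretical t * sigma^2
    print(t, round(y[:, t - 1].var(), 2), t * sigma**2)
```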

MA Model#

Invertibility#

Consider the MA(1) process: \(y_t = \varepsilon_t + 0.6\varepsilon_{t-1}\). Is this process invertible? Justify your answer.

  • Rearrange to get \(\varepsilon_t = y_t - 0.6 \varepsilon_{t-1}\), which holds at all points in time, e.g.,

    \[\begin{align*} \varepsilon_{t-1} &= y_{t-1} - 0.6 \varepsilon_{t-2} \\ \varepsilon_{t-2} &= y_{t-2} - 0.6 \varepsilon_{t-3} \end{align*}\]
  • Combine these (i.e., recursively substitute) to get an \(AR(\infty)\) Model

    \[ \varepsilon_t = y_t - 0.6 y_{t-1} + 0.6^2 y_{t-2} - 0.6^3 y_{t-3} + \cdots = y_t + \sum_{j=1}^\infty (-0.6)^j y_{t-j} \]

    or

    \[ y_t = 0.6 y_{t-1} - 0.6^2 y_{t-2} + 0.6^3 y_{t-3} - \cdots + \varepsilon_t = -\sum_{j=1}^\infty (-0.6)^j y_{t-j} + \varepsilon_t \]
  • The infinite sum converges because \(|0.6| < 1\), so \(0.6^j\) goes to \(0\) as \(j\) goes to \(\infty\). Hence the \(AR(\infty)\) representation exists and the process is invertible, as the sketch below confirms numerically.
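
A sketch of that numerical check, assuming Python with statsmodels (an assumption; the notes do not specify software):

```python
# Sketch (assumed: Python + statsmodels): check invertibility of
# y_t = eps_t + 0.6*eps_{t-1} and recover the AR(infinity) coefficients.
from statsmodels.tsa.arima_process import ArmaProcess

# Lag-polynomial convention: ar = [1] (no AR part), ma = [1, theta]
process = ArmaProcess(ar=[1], ma=[1, 0.6])
print(process.isinvertible)      # True, since |0.6| < 1

# Coefficients on y_t, y_{t-1}, ... in the AR(infinity) form of eps_t
print(process.arma2ar(lags=5))   # roughly [1, -0.6, 0.36, -0.216, 0.1296]
```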

Autocovariance#

Write down an MA(\(2\)) model. What is its first autocovariance, \(\gamma(1)\)?

  • A (mean \(0\)) MA(\(2\)) model is \(y_t = \varepsilon_t + \theta_1 \varepsilon_{t-1} + \theta_2 \varepsilon_{t-2}\)

  • \(\gamma(1) \equiv Cov(y_t,y_{t-1}) = E(y_t y_{t-1}) = E[(\varepsilon_t + \theta_1 \varepsilon_{t-1} + \theta_2 \varepsilon_{t-2})(\varepsilon_{t-1} + \theta_1 \varepsilon_{t-2} + \theta_2 \varepsilon_{t-3})]\)

  • Since the \(\varepsilon\)’s are independent over time, all cross terms have expectation \(0\), so

    \[\begin{align*} \gamma(1) &= E(\theta_1 \varepsilon_{t-1}^2) + E(\theta_1 \theta_2 \varepsilon_{t-2}^2) \\ \rightarrow \gamma(1) &= \theta_1 \sigma^2 + \theta_1 \theta_2 \sigma^2 \\ \rightarrow \gamma(1) &= \theta_1(1 + \theta_2) \sigma^2 \end{align*}\]
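
A quick simulation sketch, assuming Python with NumPy and illustrative parameter values (\(\theta_1 = 0.4\), \(\theta_2 = 0.3\), not from the question): the sample lag-\(1\) autocovariance should be close to \(\theta_1(1+\theta_2)\sigma^2\).

```python
# Simulation sketch (assumed: Python + NumPy; theta1 = 0.4, theta2 = 0.3 are
# illustrative values, not from the question): compare the sample lag-1
# autocovariance of an MA(2) with theta1 * (1 + theta2) * sigma^2.
import numpy as np

rng = np.random.default_rng(1)
theta1, theta2, sigma, T = 0.4, 0.3, 1.0, 500_000

eps = rng.normal(0.0, sigma, size=T + 2)
y = eps[2:] + theta1 * eps[1:-1] + theta2 * eps[:-2]  # y_t = eps_t + th1*eps_{t-1} + th2*eps_{t-2}

print(np.mean(y[1:] * y[:-1]))           # sample gamma(1) (mean-zero process)
print(theta1 * (1 + theta2) * sigma**2)  # theoretical value: 0.52
```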

AR Model#

Stationarity#

Consider the model \(y_t = 0.5 y_{t-1} - 0.3 y_{t-2} + \varepsilon_t\). Is it stationary? Why or why not?

  • An AR(\(2\)) has the form \(y_t = \phi_1 y_{t-1} + \phi_2 y_{t-2} + \varepsilon_t\) and is stationary if the following conditions are satisfied

    • \(|\phi_2| < 1\)

    • \(\phi_2 + \phi_1 < 1\)

    • \(\phi_2 - \phi_1 < 1\)

  • In this example, \(\phi_1 = 0.5\) and \(\phi_2 = -0.3\), and

    • \(|-0.3| = 0.3 < 1\)

    • \(-0.3 + 0.5 = 0.2 < 1\)

    • \(-0.3 - 0.5 = -0.8 < 1\)

  • Thus, \(y_t\) is stationary since its parameters satisfy the above conditions (equivalently, the roots of its lag polynomial lie outside the unit circle, as checked in the sketch below).
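
A sketch of the equivalent root check, assuming Python with NumPy:

```python
# Equivalent check (assumed: Python + NumPy): an AR(2) is stationary when the
# roots of 1 - phi1*z - phi2*z^2 = 0 all lie outside the unit circle.
import numpy as np

phi1, phi2 = 0.5, -0.3
roots = np.roots([-phi2, -phi1, 1])  # polynomial -phi2*z^2 - phi1*z + 1
print(np.abs(roots))                 # both moduli are about 1.83 > 1 -> stationary
```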

Causality#

Show that \(y_t = 0.7 y_{t-1} + \varepsilon_t\) is causal.

  • The AR(1) model structure is the same at all points in time

    \[\begin{align*} y_t &= 0.7 y_{t-1} + \varepsilon_t \\ y_{t-1} &= 0.7 y_{t-2} + \varepsilon_{t-1} \\ y_{t-2} &= 0.7 y_{t-3} + \varepsilon_{t-2} \end{align*}\]
  • Combine, i.e., recursively substitute, to get \(y_t = 0.7^3 y_{t-3} + 0.7^2 \varepsilon_{t-2}+ 0.7 \varepsilon_{t-1} + \varepsilon_t\)

  • Rinse and repeat to get \(y_t = \sum_{j=0}^\infty 0.7^j \varepsilon_{t-j}\) (i.e., the MA(\(\infty\)) Model or Wold Representation)

  • The sum converges because \(|0.7| < 1\), so \(0.7^j\) goes to \(0\) as \(j\) goes to \(\infty\). Since \(y_t\) depends only on current and past shocks, the process is causal. The sketch below verifies the Wold form numerically.
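
A small sketch, assuming Python with NumPy, that builds the AR(\(1\)) recursively and compares it with the truncated Wold sum:

```python
# Sketch (assumed: Python + NumPy): build the AR(1) recursively and compare the
# last observation with the truncated Wold sum of current and past shocks.
import numpy as np

rng = np.random.default_rng(2)
T, phi = 500, 0.7
eps = rng.normal(size=T)

y = np.zeros(T)                      # y_0 = 0
for t in range(1, T):
    y[t] = phi * y[t - 1] + eps[t]   # y_t = 0.7*y_{t-1} + eps_t

# Truncated MA(infinity)/Wold form: sum_{j=0}^{49} 0.7^j * eps_{t-j}
y_wold = sum(phi**j * eps[T - 1 - j] for j in range(50))
print(y[-1], y_wold)                 # nearly identical because 0.7^j dies out fast
```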

ARMA Model#

ARMA(\(1,1\)) \(\rightarrow\) \(AR(\infty)\)#

Show that an ARMA(\(1,1\)) process can be rewritten as an AR(\(\infty\)). Find the first three AR coefficients.

  • An ARMA(\(1,1\)) is \(y_t = \phi y_{t-1} + \varepsilon_t + \theta \varepsilon_{t-1}\), which holds at all points in time

    \[\begin{align*} \varepsilon_t &= y_t - \phi y_{t-1} - \theta \varepsilon_{t-1} \\ \varepsilon_{t-1} &= y_{t-1} - \phi y_{t-2} - \theta \varepsilon_{t-2} \\ \varepsilon_{t-2} &= y_{t-2} - \phi y_{t-3} - \theta \varepsilon_{t-3} \\ \varepsilon_{t-3} &= y_{t-3} - \phi y_{t-4} - \theta \varepsilon_{t-4} \end{align*}\]
  • Combine, i.e., recursively substitute, to get

    \[\begin{align*} \varepsilon_t &= y_t - \phi y_{t-1} - \theta (y_{t-1} - \phi y_{t-2} - \theta \varepsilon_{t-2}) \\ \rightarrow \varepsilon_t &= y_t - (\phi + \theta)y_{t-1} + \theta \phi y_{t-2} + \theta^2 \varepsilon_{t-2} \\ \rightarrow \varepsilon_t &= y_t - (\phi + \theta)y_{t-1} + \theta(\phi + \theta) y_{t-2} - \theta^2 \phi y_{t-3} - \theta^3 \varepsilon_{t-3} \\ \rightarrow \varepsilon_t &= y_t - (\phi + \theta)y_{t-1} + \theta(\phi + \theta) y_{t-2} - \theta^2 (\phi + \theta) y_{t-3} + \theta^3 \phi y_{t-4} + \theta^4 \varepsilon_{t-4} \end{align*}\]

    Thus, the first \(3\) AR coefficients are \(- (\phi + \theta)\), \(\theta(\phi + \theta)\), and \(- \theta^2(\phi + \theta)\)

  • Generalizing this pattern converts the ARMA(\(1,1\)) to an AR(\(\infty\)):

    \[ \varepsilon_t = y_t - (\phi + \theta) \sum_{j=0}^\infty (-\theta)^j y_{t-j-1} \]

    We can verify that this expression reproduces the first \(3\) AR coefficients above; the sketch below also checks it numerically.
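
A sketch of that check, assuming Python with statsmodels and illustrative values \(\phi = 0.5\), \(\theta = 0.4\) (not from the notes):

```python
# Sketch (assumed: Python + statsmodels; phi = 0.5, theta = 0.4 are illustrative
# values, not from the notes): compare the closed-form AR(infinity) coefficients
# -(phi + theta)*(-theta)^(j-1) with a numerical inversion of the ARMA(1,1).
from statsmodels.tsa.arima_process import ArmaProcess

phi, theta = 0.5, 0.4
process = ArmaProcess(ar=[1, -phi], ma=[1, theta])

print(process.arma2ar(lags=4))  # roughly [1, -0.9, 0.36, -0.144]
print([1] + [-(phi + theta) * (-theta) ** (j - 1) for j in (1, 2, 3)])
```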

Variance#

Find the variance of an ARMA(1,1) process.

  • An ARMA(\(1,1\)) is \(y_t = \phi y_{t-1} + \varepsilon_t + \theta \varepsilon_{t-1}\). Taking \(Var\) directly would be tricky because \(y_{t-1}\) and \(\varepsilon_{t-1}\) are correlated, so work with \(\gamma(0) = E(y_t^2)\) instead.

  • \(\gamma(0) \equiv Var(y_t) = E(y_t^2) = E[(\phi y_{t-1} + \varepsilon_t + \theta \varepsilon_{t-1})^2]\)

    \[\begin{align*} &= E(\phi^2 y_{t-1}^2) + 2\phi\theta E[y_{t-1} \varepsilon_{t-1}] + E(\varepsilon_t^2) + \theta^2 E(\varepsilon_{t-1}^2) \\ &= \phi^2 \gamma(0) + 2\phi\theta E[y_{t-1} \varepsilon_{t-1}] + (1 + \theta^2)\sigma^2 \end{align*}\]
  • Note that \(E[y_{t-1} \varepsilon_{t-1}] = \sigma^2\) because \(y_{t-1} = \phi y_{t-2} + \varepsilon_{t-1} + \theta \varepsilon_{t-2}\) and only its \(\varepsilon_{t-1}\) term is correlated with \(\varepsilon_{t-1}\). Thus

    \[\begin{align*} \gamma(0) &= \phi^2 \gamma(0) + 2\phi\theta \sigma^2 + (1 + \theta^2)\sigma^2 \\ \rightarrow \gamma(0)(1 -\phi^2) &= (1 + 2\phi\theta + \theta^2)\sigma^2 \\ \rightarrow \gamma(0) &= (1 + 2\phi\theta + \theta^2)\sigma^2 /(1 -\phi^2) \end{align*}\]
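
A simulation sketch, assuming Python with NumPy and illustrative values \(\phi = 0.5\), \(\theta = 0.4\), \(\sigma = 1\), that compares the sample variance of a long simulated path with the formula:

```python
# Simulation sketch (assumed: Python + NumPy; phi = 0.5, theta = 0.4, sigma = 1
# are illustrative values): compare the sample variance of a long ARMA(1,1)
# path with (1 + 2*phi*theta + theta^2) * sigma^2 / (1 - phi^2).
import numpy as np

rng = np.random.default_rng(3)
phi, theta, sigma, T = 0.5, 0.4, 1.0, 500_000

eps = rng.normal(0.0, sigma, size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi * y[t - 1] + eps[t] + theta * eps[t - 1]

print(y.var())                                                     # sample gamma(0)
print((1 + 2 * phi * theta + theta**2) * sigma**2 / (1 - phi**2))  # 2.08
```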

ARIMA Model#

Differencing#

Suppose you have the ARIMA(\(1,1,0\)) model \(\Delta y_t = 0.5 \Delta y_{t-1} + \varepsilon_t\). Rewrite it in its original (non-differenced) form.

  • \(\Delta y_t = y_t - y_{t-1}\) and \(\Delta y_{t-1} = y_{t-1} - y_{t-2}\)

  • Substituting that in we get

    \[ y_t - y_{t-1} = 0.5 (y_{t-1} - y_{t-2}) + \varepsilon_t \]
  • Rearranging yields an AR(\(2\)) with a unit root (see the factorization check below):

    \[ y_t = 1.5 y_{t-1} - 0.5 y_{t-2} + \varepsilon_t \]
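
A one-line check, assuming Python with NumPy: multiplying the lag polynomials \((1 - L)\) and \((1 - 0.5L)\) recovers \(1 - 1.5L + 0.5L^2\), which makes the unit root explicit.

```python
# Quick check (assumed: Python + NumPy): the AR(2) lag polynomial 1 - 1.5L + 0.5L^2
# factors as (1 - L)(1 - 0.5L), i.e., a unit root times a stationary AR(1) part.
import numpy as np

print(np.polymul([1, -1], [1, -0.5]))  # [ 1.  -1.5  0.5]
```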

Integration#

Explain how you would determine the order of integration \(d\) for a time series.

  1. Visually inspect the raw data for trends and seasonality. If there is a time trend or seasonality, remove it by differencing (first difference).

  2. Plot the ACF and run a unit root test on the transformed data. If the data appear non-stationary, take a difference.

  3. Repeat step 2 until the series appears stationary. The number of times the data were differenced to achieve stationarity is the order of integration \(d\). A minimal version of this loop is sketched below.
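
A sketch of the procedure, assuming Python with pandas and statsmodels; the series `y`, the function name, and the 5% threshold are illustrative, not from the notes.

```python
# Sketch of the procedure (assumed: Python + pandas + statsmodels; `y`, the
# function name, and the 5% threshold are illustrative, not from the notes).
from statsmodels.tsa.stattools import adfuller

def order_of_integration(y, alpha=0.05, max_d=3):
    """Difference `y` until an ADF test rejects a unit root; return d."""
    d = 0
    while d <= max_d:
        stat, pvalue, *_ = adfuller(y.dropna())
        if pvalue < alpha:   # reject the unit-root null -> stationary at this d
            return d
        y = y.diff()         # take another difference and retest
        d += 1
    return d
```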

Unit Root Test#

Write down the null and alternative hypotheses of the augmented Dickey-Fuller test (ADF test). You run an ADF test on a time series and obtain a test statistic of -1.8. If the critical value at the 5% level is -2.86, what is your conclusion?

  • The augmented Dickey-Fuller test (ADF test) has hypotheses

    • \(H_0\): The time series has a unit root, indicating it is non-stationary.

    • \(H_A\): The time series does not have a unit root, suggesting it is stationary.

  • To reject the null hypothesis we would need a test statistic that is less than the critical value of \(-2.86\). Since \(-1.8\) is not less than the critical value, we fail to reject the null hypothesis: the time series likely has a unit root (is non-stationary). The sketch below shows this comparison in code.
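
A sketch of that comparison, assuming Python with statsmodels and a hypothetical series `y`:

```python
# Sketch (assumed: Python + statsmodels; `y` is a hypothetical series): compare
# the ADF statistic to the tabulated 5% critical value, as in the question.
from statsmodels.tsa.stattools import adfuller

stat, pvalue, _, _, crit, _ = adfuller(y)
print(stat, crit["5%"])  # e.g., -1.8 vs. -2.86
if stat < crit["5%"]:
    print("Reject H0: no unit root at the 5% level")
else:
    print("Fail to reject H0: the series likely has a unit root")
```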