Review Key
by Professor Throckmorton for Time Series Econometrics W&M ECON 408
Stationarity

Conditions

Define covariance stationarity, i.e., what are the conditions for a time series to be weakly stationary?
A stable mean over time, i.e., $E(y_t) = \mu$ (is not a function of $t$).
A constant variance, i.e., $Var(y_t) = E[(y_t - \mu)^2] = \sigma^2$ (is not a function of $t$).
A stable autocovariance over time, i.e., $Cov(y_t, y_{t-\tau}) = \gamma(\tau)$ (is not a function of $t$). $\tau$ is known as the displacement or lag.
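As an informal numerical sketch (my own illustration, not part of the key; the process, sample size, and half-sample split are arbitrary choices): Gaussian white noise is covariance stationary, so its sample mean, variance, and lag-1 autocovariance should look roughly the same in any subsample.

```python
# Illustrative check of the three stationarity conditions on simulated Gaussian
# white noise (a covariance-stationary process). The sample size and the
# half-sample split are arbitrary choices for this sketch.
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(0.0, 1.0, size=200_000)

def moments(x):
    """Sample mean, sample variance, and sample lag-1 autocovariance."""
    m = x.mean()
    return m, x.var(), np.mean((x[1:] - m) * (x[:-1] - m))

first = moments(y[:100_000])    # moments over the first half of the sample
second = moments(y[100_000:])   # moments over the second half
print(first)
print(second)                   # both should be close to (0, 1, 0)
```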
Random Walk

Write down a random walk and solve for its variance, i.e., $Var(y_t)$. Given your answer, is a random walk stationary? Why or why not?
A (mean 0) random walk is $y_t = y_{t-1} + \varepsilon_t$. Taking the variance and noting that $y_{t-1}$ is independent of $\varepsilon_t$,
\begin{align*}
Var(y_t) &= Var(y_{t-1} + \varepsilon_t) \\
\rightarrow Var(y_t) &= Var(y_{t-1}) + Var(\varepsilon_t) \\
\rightarrow Var(y_t) &= Var(y_{t-1}) + \sigma^2
\end{align*}

That structure holds at all points in time:
\begin{align*}
Var(y_t) &= Var(y_{t-1}) + \sigma^2 \\
Var(y_{t-1}) &= Var(y_{t-2}) + \sigma^2 \\
Var(y_{t-2}) &= Var(y_{t-3}) + \sigma^2
\end{align*}

Combine, i.e., recursively substitute, to get $Var(y_t) = Var(y_0) + t\sigma^2$. Thus, the variance is not constant because it is a function of time, which violates one of the stationarity conditions.
Alternatively, you can prove $y_t$ is not stationary by contradiction. Assume that $y_t$ is stationary; then $Var(y_t) = Var(y_0)$, but that implies $0 = t\sigma^2$, which is false for any $t > 0$.
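The linear growth $Var(y_t) = Var(y_0) + t\sigma^2$ can be checked by Monte Carlo. A minimal sketch, assuming $y_0 = 0$ and $\sigma = 1$ for illustration:

```python
# Monte Carlo check of Var(y_t) = Var(y_0) + t * sigma^2, assuming y_0 = 0 and
# sigma = 1 for illustration: the cross-sectional variance across many
# simulated random-walk paths should grow linearly with t.
import numpy as np

rng = np.random.default_rng(0)
n_paths, T = 50_000, 100
eps = rng.normal(0.0, 1.0, size=(n_paths, T))
y = eps.cumsum(axis=1)          # y_t = y_{t-1} + eps_t, starting from y_0 = 0

var_at_25 = y[:, 24].var()      # theory: 25 * sigma^2 = 25
var_at_100 = y[:, 99].var()     # theory: 100 * sigma^2 = 100
print(var_at_25, var_at_100)
```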
Invertibility

Consider the MA(1) process $y_t = \varepsilon_t + 0.6\varepsilon_{t-1}$. Is this process invertible? Justify your answer.
Rearrange to get $\varepsilon_t = y_t - 0.6\varepsilon_{t-1}$, which holds at all points in time, e.g.,
\begin{align*}
\varepsilon_{t-1} &= y_{t-1} - 0.6\varepsilon_{t-2} \\
\varepsilon_{t-2} &= y_{t-2} - 0.6\varepsilon_{t-3}
\end{align*}

Combine these (i.e., recursively substitute) to get an $AR(\infty)$ model:
$$\varepsilon_t = y_t - 0.6 y_{t-1} + 0.6^2 y_{t-2} - 0.6^3 y_{t-3} + \cdots = y_t + \sum_{j=1}^\infty (-0.6)^j y_{t-j}$$

or

$$y_t = -\sum_{j=1}^\infty (-0.6)^j y_{t-j} + \varepsilon_t$$

The sum converges since the weights $(-0.6)^j$ go to 0 geometrically as $j \to \infty$ (because $|0.6| < 1$), so the $AR(\infty)$ representation exists and the process is invertible.
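A quick numerical sketch of invertibility (parameters from the question; the sample size and truncation point $J$ are arbitrary choices): the shock can be recovered from a truncated version of the $AR(\infty)$ sum.

```python
# Numerical check of invertibility for y_t = eps_t + 0.6 eps_{t-1}: recover the
# shock from a truncated AR(infinity) sum, eps_t ~= sum_{j=0}^{J} (-0.6)^j y_{t-j}.
# Because the weights decay geometrically, the truncation error is ~0.6**(J+1).
import numpy as np

rng = np.random.default_rng(0)
T, J = 10_000, 60
eps = rng.normal(0.0, 1.0, size=T)
y = eps.copy()
y[1:] += 0.6 * eps[:-1]          # MA(1) with theta = 0.6

t = T - 1                        # recover the most recent shock
eps_hat = sum((-0.6) ** j * y[t - j] for j in range(J + 1))
error = abs(eps_hat - eps[t])
print(error)                     # tiny: truncation error on the order of 0.6**61
```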
Autocovariance

Write down an MA(2) model. What is its first autocovariance, $\gamma(1)$?
A (mean 0) MA(2) model is $y_t = \varepsilon_t + \theta_1 \varepsilon_{t-1} + \theta_2 \varepsilon_{t-2}$.
$$\gamma(1) \equiv Cov(y_t, y_{t-1}) = E(y_t y_{t-1}) = E[(\varepsilon_t + \theta_1 \varepsilon_{t-1} + \theta_2 \varepsilon_{t-2})(\varepsilon_{t-1} + \theta_1 \varepsilon_{t-2} + \theta_2 \varepsilon_{t-3})]$$
Note that the $\varepsilon$'s are independent over time, so only the matching-date products survive:
\begin{align*}
\gamma(1) &= E(\theta_1 \varepsilon_{t-1}^2) + E(\theta_1 \theta_2 \varepsilon_{t-2}^2) \\
\rightarrow \gamma(1) &= \theta_1 \sigma^2 + \theta_1 \theta_2 \sigma^2 \\
\rightarrow \gamma(1) &= \theta_1(1 + \theta_2) \sigma^2
\end{align*}

Causality

Show that $y_t = 0.7 y_{t-1} + \varepsilon_t$ is causal.
The AR(1) model structure is the same at all points in time:

\begin{align*}
y_t &= 0.7 y_{t-1} + \varepsilon_t \\
y_{t-1} &= 0.7 y_{t-2} + \varepsilon_{t-1} \\
y_{t-2} &= 0.7 y_{t-3} + \varepsilon_{t-2}
\end{align*}

Combine, i.e., recursively substitute, to get $y_t = 0.7^3 y_{t-3} + 0.7^2 \varepsilon_{t-2} + 0.7 \varepsilon_{t-1} + \varepsilon_t$.
Rinse and repeat to get $y_t = \sum_{j=0}^\infty 0.7^j \varepsilon_{t-j}$, i.e., the MA($\infty$) model or Wold representation.

The sum converges since the weights $0.7^j$ go to 0 geometrically as $j \to \infty$ (because $|0.7| < 1$), so $y_t$ is a well-defined function of current and past shocks only, i.e., the process is causal.
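A numerical sketch of the Wold representation (the sample size and truncation point $J$ are arbitrary choices for illustration): rebuild $y_t$ from a truncated sum of past shocks.

```python
# Numerical check of causality for y_t = 0.7 y_{t-1} + eps_t: rebuild y_t from a
# truncated Wold sum, y_t ~= sum_{j=0}^{J} 0.7^j eps_{t-j}. The sample size and
# truncation point J are arbitrary choices for this sketch.
import numpy as np

rng = np.random.default_rng(0)
T, J = 5_000, 80
eps = rng.normal(0.0, 1.0, size=T)
y = np.zeros(T)
for t in range(1, T):            # AR(1) recursion with y_0 = 0
    y[t] = 0.7 * y[t - 1] + eps[t]

t = T - 1
y_hat = sum(0.7 ** j * eps[t - j] for j in range(J + 1))
error = abs(y_hat - y[t])
print(error)                     # tiny: truncation error on the order of 0.7**81
```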
ARMA($1,1$) $\rightarrow$ AR($\infty$)

Show that an ARMA($1,1$) process can be rewritten as an AR($\infty$). Find the first three AR coefficients.
An ARMA($1,1$) is $y_t = \phi y_{t-1} + \varepsilon_t + \theta \varepsilon_{t-1}$, which holds at all points in time:
\begin{align*}
\varepsilon_t &= y_t - \phi y_{t-1} - \theta \varepsilon_{t-1} \\
\varepsilon_{t-1} &= y_{t-1} - \phi y_{t-2} - \theta \varepsilon_{t-2} \\
\varepsilon_{t-2} &= y_{t-2} - \phi y_{t-3} - \theta \varepsilon_{t-3} \\
\varepsilon_{t-3} &= y_{t-3} - \phi y_{t-4} - \theta \varepsilon_{t-4}
\end{align*}

Combine, i.e., recursively substitute, to get
\begin{align*}
\varepsilon_t &= y_t - \phi y_{t-1} - \theta (y_{t-1} - \phi y_{t-2} - \theta \varepsilon_{t-2}) \\
\rightarrow \varepsilon_t &= y_t - (\phi + \theta)y_{t-1} + \theta \phi y_{t-2} + \theta^2 \varepsilon_{t-2} \\
\rightarrow \varepsilon_t &= y_t - (\phi + \theta)y_{t-1} + \theta(\phi + \theta) y_{t-2} - \theta^2 \phi y_{t-3} - \theta^3 \varepsilon_{t-3} \\
\rightarrow \varepsilon_t &= y_t - (\phi + \theta)y_{t-1} + \theta(\phi + \theta) y_{t-2} - \theta^2 (\phi + \theta) y_{t-3} + \theta^3 \phi y_{t-4} + \theta^4 \varepsilon_{t-4}
\end{align*}

Thus, the first 3 AR coefficients (on $y_{t-1}$, $y_{t-2}$, and $y_{t-3}$) are $-(\phi + \theta)$, $\theta(\phi + \theta)$, and $-\theta^2(\phi + \theta)$.
Using this pattern to convert the ARMA($1,1$) to an AR($\infty$) yields

$$\varepsilon_t = y_t - (\phi + \theta) \sum_{j=0}^\infty (-\theta)^j y_{t-j-1}$$

Expanding the first three terms of the sum verifies that this expression reproduces the three AR coefficients above.
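A numerical sketch of the $AR(\infty)$ inversion. The parameter values $\phi = 0.5$ and $\theta = 0.4$, the sample size, and the truncation point are assumed purely for illustration.

```python
# Numerical check of the AR(infinity) form of an ARMA(1,1). The parameter
# values phi = 0.5 and theta = 0.4 are assumed purely for illustration.
import numpy as np

phi, theta = 0.5, 0.4
rng = np.random.default_rng(0)
T, J = 5_000, 60
eps = rng.normal(0.0, 1.0, size=T)
y = np.zeros(T)
for t in range(1, T):            # ARMA(1,1) recursion with y_0 = 0
    y[t] = phi * y[t - 1] + eps[t] + theta * eps[t - 1]

# eps_t ~= y_t - (phi + theta) * sum_{j=0}^{J} (-theta)^j y_{t-j-1}
t = T - 1
eps_hat = y[t] - (phi + theta) * sum(
    (-theta) ** j * y[t - j - 1] for j in range(J + 1)
)
error = abs(eps_hat - eps[t])
print(error)                     # tiny, since the weights decay like theta**j
```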
Variance

Find the variance of an ARMA(1,1) process.
An ARMA($1,1$) is $y_t = \phi y_{t-1} + \varepsilon_t + \theta \varepsilon_{t-1}$. Taking $Var$ directly would be tricky because of the dependent terms, so work with $\gamma(0) \equiv Var(y_t) = E(y_t^2)$ instead:
$$\gamma(0) \equiv E[(\phi y_{t-1} + \varepsilon_t + \theta \varepsilon_{t-1})(\phi y_{t-1} + \varepsilon_t + \theta \varepsilon_{t-1})]$$

\begin{align*}
&= E(\phi^2 y_{t-1}^2) + 2\phi\theta E[y_{t-1} \varepsilon_{t-1}] + E(\varepsilon_t^2) + \theta^2 E(\varepsilon_{t-1}^2) \\
&= \phi^2 \gamma(0) + 2\phi\theta E[y_{t-1} \varepsilon_{t-1}] + (1 + \theta^2)\sigma^2
\end{align*}

The cross terms involving $E[y_{t-1}\varepsilon_t]$ and $E[\varepsilon_t \varepsilon_{t-1}]$ drop out because $\varepsilon_t$ is independent of everything dated $t-1$ or earlier. Note that $E[y_{t-1} \varepsilon_{t-1}] = \sigma^2$, since $y_{t-1} = \varepsilon_{t-1} + (\text{terms dated } t-2 \text{ and earlier})$. Thus
\begin{align*}
\gamma(0) &= \phi^2 \gamma(0) + 2\phi\theta \sigma^2 + (1 + \theta^2)\sigma^2 \\
\rightarrow \gamma(0)(1 - \phi^2) &= (1 + 2\phi\theta + \theta^2)\sigma^2 \\
\rightarrow \gamma(0) &= (1 + 2\phi\theta + \theta^2)\sigma^2 / (1 - \phi^2)
\end{align*}

Differencing

Suppose you have the ARIMA($1,1,0$) model $\Delta y_t = 0.5 \Delta y_{t-1} + \varepsilon_t$. Rewrite it in its original (non-differenced) form.
$\Delta y_t = y_t - y_{t-1}$ and $\Delta y_{t-1} = y_{t-1} - y_{t-2}$. Substituting these in, we get

$$y_t - y_{t-1} = 0.5 (y_{t-1} - y_{t-2}) + \varepsilon_t$$

Rearranging yields an AR(2):

$$y_t = 1.5 y_{t-1} - 0.5 y_{t-2} + \varepsilon_t$$

Integration

Explain how you would determine the order of integration $d$ for a time series.
1. Visually inspect the raw data for trends and seasonality. If there is either a time trend or seasonality, remove it by taking a first difference.
2. Visually inspect the differenced data for a constant mean, variance, and autocovariance, and plot the ACF. If the data still appear non-stationary (e.g., the ACF dies off very slowly), take another difference.
3. Repeat step 2 until the data appear stationary. The number of differences needed to achieve stationarity is $d$; differencing more than twice is unusual.
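The steps above can be sketched numerically on a simulated I(1) series. This is illustrative only: the lag-1 autocorrelation threshold below is my ad hoc stand-in for the visual ACF inspection, and in practice a formal unit-root test (e.g., an augmented Dickey-Fuller test) would be used instead.

```python
# Rough sketch of the differencing procedure on a simulated I(1) series
# (a random walk). The 0.95 autocorrelation threshold is an ad hoc numerical
# stand-in for the visual ACF inspection described above.
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(0.0, 1.0, size=20_000).cumsum()   # random walk: integrated of order 1

def acf1(x):
    """Lag-1 sample autocorrelation."""
    x = x - x.mean()
    return np.sum(x[1:] * x[:-1]) / np.sum(x * x)

d = 0
while acf1(y) > 0.95 and d < 3:   # slowly dying ACF suggests non-stationarity
    y = np.diff(y)                # difference again
    d += 1
print(d)                          # a random walk needs one difference: d = 1
```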