
RBC Model

by Professor Throckmorton
for Time Series Econometrics
W&M ECON 408/PUBP 616
Slides

Summary

  • Introduce a nonlinear theoretical model of business cycles.

  • Get the equilibrium conditions and linearize the system of equations.

  • Solve the linear system, i.e., express (observable) endogenous variables as functions of (hidden) state variables.

  • Estimate the state vector with the Kalman filter.

  • The following example is a simplified exposition of Chad Fulton’s RBC example in Python here: https://www.chadfulton.com/topics/estimating_rbc.html

The Model

  • A social planner chooses how much to consume, work, and invest, $\{c_{t+j},n_{t+j},i_{t+j}\}_{j=0}^\infty$, to maximize present-value lifetime utility

    $$E_t \sum_{j=0}^\infty \beta^j \left[\log(c_{t+j}) - \psi n_{t+j}\right]$$

    subject to:

    • resource constraint: $y_t = c_t + i_t$

    • capital accumulation: $k_t = (1-\delta)k_{t-1} + i_t$

    • production function: $y_t = z_t k_{t-1}^\alpha n_t^{1-\alpha}$

    • Total factor productivity (TFP): $\log z_t = (1-\rho)\bar{z} + \rho \log z_{t-1} + \eta_t$ where $\eta_t \sim N(0, \sigma^2)$ (simulated in the code sketch after this list)

  • $E_t$ is an expectation operator conditional on data through time $t$, i.e., it's a forecast because TFP is random/stochastic
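
To build intuition for the driving process, here is a minimal sketch of simulating one draw of the TFP process above, assuming $\bar z = 0$ and using the example values $\rho = 0.85$ and $\sigma^2 = 0.04$ from the next section (the seed and sample length are arbitrary):

# Simulate the TFP process: log z_t = (1 - rho)*zbar + rho*log z_{t-1} + eta_t
import numpy as np
rng = np.random.default_rng(0)             # arbitrary seed
rho, sigma2, zbar = 0.85, 0.04, 0.0        # example values; zbar = 0 by assumption
T = 200                                    # arbitrary sample length
eta = rng.normal(0.0, np.sqrt(sigma2), T)  # eta_t ~ N(0, sigma^2)
logz = np.zeros(T)
for t in range(1, T):
    logz[t] = (1 - rho)*zbar + rho*logz[t-1] + eta[t]
z = np.exp(logz)                           # TFP in levels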

Parameters

  • This model has the following parameters:

| Parameter | Description | Parameter | Description |
|---|---|---|---|
| $\beta$ | Discount factor | $\rho$ | TFP shock persistence |
| $\psi$ | Marginal disutility of work | $\sigma^2$ | TFP shock variance |
| $\delta$ | Capital depreciation rate | $\alpha$ | Capital share of output |
  • Example parameterization:

| Parameter | Value | Parameter | Value |
|---|---|---|---|
| $\beta$ | 0.95 | $\rho$ | 0.85 |
| $\psi$ | 3 | $\sigma^2$ | 0.04 |
| $\delta$ | 0.025 | $\alpha$ | 0.36 |
  • The ultimate goal is to estimate (some of) these parameters.
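
For later reference, the example parameterization can be collected in one place (a small dictionary; the variable names are my own, not from the original code):

# Example parameterization of the RBC model
params = {
    'beta': 0.95,    # discount factor
    'psi': 3.0,      # marginal disutility of work
    'delta': 0.025,  # capital depreciation rate
    'rho': 0.85,     # TFP shock persistence
    'sigma2': 0.04,  # TFP shock variance
    'alpha': 0.36,   # capital share of output
}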

Equilibrium

  • The RBC model is a constrained optimization problem that can be written as a Lagrangian; its first-order conditions are the equilibrium conditions.

  • In equilibrium, the marginal benefit (i.e., utility) of a unit of consumption today equals the marginal cost (i.e., lost future utility) of forgoing the future return on a unit of investment:

    $$\frac{1}{c_t} = E_t \left\{ \beta \frac{1}{c_{t+1}} \left[ r_{t+1} + (1 - \delta) \right] \right\}$$

    where $1/c_t$ is the marginal utility of consumption and $r_t = \alpha y_t/k_{t-1}$ is the marginal product of capital.

  • In equilibrium, the marginal benefit (i.e., utility) of work equals the marginal cost (i.e., disutility) of work.

    $$\psi = \frac{1}{c_t} w_t$$

    where $w_t = (1 - \alpha) y_t/n_t$ is the marginal product of work.

  • Including the constraints completes the nonlinear equilibrium system.
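
As a check on these conditions, the non-stochastic steady state can be computed directly. A minimal sketch, assuming $\bar z = 0$ (so $z = 1$) and using the example parameterization; the variable names are my own:

# Steady state implied by the equilibrium conditions (zbar = 0, so z = 1)
beta, psi, delta, alpha = 0.95, 3.0, 0.025, 0.36
r = 1/beta - (1 - delta)        # Euler equation: 1 = beta*(r + 1 - delta)
k_y = alpha/r                   # r = alpha*y/k  =>  k/y = alpha/r
i_y = delta*k_y                 # capital accumulation: i = delta*k
c_y = 1 - i_y                   # resource constraint: c = y - i
y_n = k_y**(alpha/(1 - alpha))  # production function: y/n = (k/y)^(alpha/(1-alpha))
n = (1 - alpha)/(psi*c_y)       # labor condition: psi*c = (1 - alpha)*y/n
y = y_n*n; c = c_y*y; i = i_y*y; k = k_y*y
print(f"r = {r:.4f}, n = {n:.3f}, y = {y:.3f}, c = {c:.3f}, i = {i:.3f}, k = {k:.3f}")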

Linear System

  • The linearized equilibrium system of equations is

    $$\begin{gather*}
    -\gamma \hat{c}_t = -\gamma E_t\hat{c}_{t+1} + \beta r E_t\hat{r}_{t+1} \\
    \hat{r}_t = \hat{y}_t - \hat{k}_{t-1} \\
    \hat{c}_t = \hat{w}_t \\
    \hat{w}_t = \hat{y}_t - \hat{n}_t \\
    y \hat{y}_t = c\hat{c}_t + i\hat{i}_t \\
    k\hat{k}_t = (1-\delta)k\hat{k}_{t-1} + i\hat{i}_t \\
    \hat{y}_t = \hat{z}_t + \alpha\hat{k}_{t-1} + (1-\alpha)\hat{n}_t \\
    \hat{z}_t = \rho \hat{z}_{t-1} + \sigma \eta_t
    \end{gather*}$$
  • The state variables for this system are $(k_{t-1}, z_{t-1})$, i.e., the social planner needs to know those to make decisions at $t$.

  • There are 8 variables, $\{c, r, y, k, w, n, i, z\}$, which (importantly!) equals the number of equations.

  • This is a linear model with (rational) expectations/forecasts, i.e., the social planner forecasts with full understanding of the structure of the model, the initial conditions/state, $(k_{t-1}, z_{t-1})$, and the distribution of TFP.

  • A solution to the model expresses the endogenous variables $(y_t, c_t, i_t, n_t)$ as functions of the state variables $(k_{t-1}, z_{t-1})$.

  • We can obtain the solution with methods in Blanchard-Kahn (1980) or Chris Sims’ gensys.

  • The solution to the linear system is represented in state-space form.

Solution

  • Observation equation(s):

$$\begin{gather*}
\begin{bmatrix} \hat y_t \\ \hat n_t \\ \hat c_t \end{bmatrix} =
\begin{bmatrix}
1 - \frac{1-\alpha}{\alpha} \phi_{ck} & \frac{1}{\alpha} - \frac{1-\alpha}{\alpha} \phi_{cz} \\
1 - \frac{1}{\alpha} \phi_{ck} & \frac{1}{\alpha} \left[ 1 - \phi_{cz} \right] \\
\phi_{ck} & \phi_{cz}
\end{bmatrix}
\begin{bmatrix} \hat k_{t} \\ \hat z_{t} \end{bmatrix} +
\begin{bmatrix} \varepsilon_{1,t} \\ \varepsilon_{2,t} \\ \varepsilon_{3,t} \end{bmatrix}
\end{gather*}$$
  • State equation(s):

$$\begin{gather*}
\begin{bmatrix} \hat k_t \\ \hat z_t \end{bmatrix} =
\begin{bmatrix} T_{kk} & T_{kz} \\ 0 & \rho \end{bmatrix}
\begin{bmatrix} \hat k_{t-1} \\ \hat z_{t-1} \end{bmatrix} +
\begin{bmatrix} 0 \\ 1 \end{bmatrix} \eta_t
\end{gather*}$$
  • Given the above parameterization, the solution is

| Parameter | Value |
|---|---|
| $\phi_{ck}$ | 0.534 |
| $\phi_{cz}$ | 0.487 |
| $T_{kk}$ | 0.884 |
| $T_{kz}$ | 0.319 |
| $\alpha$ | 0.36 |
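
With these values the solution can be exercised directly. For example, here is a minimal sketch of the impulse responses of the states and observables to a unit TFP shock ($\eta_0 = 1$), using the matrices above (the horizon is arbitrary, and measurement error is ignored):

# Impulse responses implied by the state-space solution
import numpy as np
phick, phicz = 0.534, 0.487
Tkk, Tkz, alpha, rho = 0.884, 0.319, 0.36, 0.85
T = np.array([[Tkk, Tkz],                                   # state transition
              [0.0, rho]])
Z = np.array([[1 - (1-alpha)*phick/alpha, 1/alpha - (1-alpha)*phicz/alpha],
              [1 - phick/alpha,           (1 - phicz)/alpha],
              [phick,                     phicz]])          # maps (k, z) into (y, n, c)
horizon = 20
x = np.zeros((horizon, 2)); x[0] = [0.0, 1.0]               # unit TFP shock at t = 0
for t in range(1, horizon):
    x[t] = T @ x[t-1]
irf = x @ Z.T                                               # responses of (y, n, c)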

Business Cycle Data

  • In the RBC model, $\hat{x}_t \equiv 100\times(x_t/\bar{x} - 1)$ where $\bar{x}$ is the (constant) long-run equilibrium.

  • Q: What is the equivalent object in time series data with stochastic or time trends?

  • The Hodrick-Prescott (HP) filter is one (of many) mathematical tool(s) used to decompose a time series $x_t$ into:

    • $\tau_t$: A smooth trend component capturing both time and stochastic trends

    • $c_t$: A cyclical component, i.e., the object of interest for business cycle economists

  • In other words, we can express the data as percent changes from a “long-run equilibrium”, $\hat{x}_t = 100(x_t/\tau_t - 1)$, i.e., percent deviations from trend.

  • In business cycle research, other popular choices are the band-pass filter and Hamilton filter.

  • James Hamilton wrote a critique of the HP filter, and then created “a better alternative,” which now has over 2,000 citations as of Spring 2025.

  • The HP filter finds the trend $\tau_t$ by solving a minimization problem (implemented directly in the code sketch after this list):

    $$\min_{\{\tau_t\}} \sum_{t=1}^T (y_t - \tau_t)^2 + \lambda \sum_{t=2}^{T-1} \left[(\tau_{t+1} - \tau_t) - (\tau_t - \tau_{t-1})\right]^2$$
    • First term: $(y_t - \tau_t)^2$ makes the trend close to the actual data.

    • Second term: $\left[(\tau_{t+1} - \tau_t) - (\tau_t - \tau_{t-1})\right]^2$ penalizes sharp changes in the slope of the trend, enforcing smoothness.

  • $\lambda$ influences the smoothness of the resulting trend

    • Small $\lambda$ → Trend follows the data closely (e.g., wiggly trend with “sharp” changes).

    • Large $\lambda$ → Trend is very smooth (e.g., nearly a straight line).

    • Common Choices for $\lambda$

      • $\lambda = 1600$ for quarterly data

      • $\lambda = 14400$ for monthly data

      • $\lambda = 100$ for annual data
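
To make the minimization concrete: the first-order conditions are linear in the trend, so the solution has the closed form $\tau = (I + \lambda K'K)^{-1} y$, where $K$ is the $(T-2)\times T$ second-difference matrix. Here is a minimal dense-matrix sketch of that formula (statsmodels' hpfilter, used below, solves the same linear system with sparse matrices):

# HP filter via its closed-form solution: tau = (I + lam*K'K)^{-1} y
import numpy as np

def hp_trend(y, lam=1600.0):
    y = np.asarray(y, dtype=float)
    T = len(y)
    # Each row of K takes a second difference: tau_{t+2} - 2*tau_{t+1} + tau_t
    K = np.zeros((T - 2, T))
    for t in range(T - 2):
        K[t, t:t+3] = [1.0, -2.0, 1.0]
    tau = np.linalg.solve(np.eye(T) + lam*(K.T @ K), y)  # trend
    return tau, y - tau                                  # (trend, cycle)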

Reading Data

# Libraries
from fredapi import Fred
import pandas as pd
# Set up access to FRED
fred_api_key = pd.read_csv('fred_api_key.txt', header=None).iloc[0,0]
fred = Fred(api_key=fred_api_key)
# Series to get
series = ['GDP','PRS85006023','PCE','GDPDEF','USREC']
rename = ['gdp','hours','cons','price','rec']
# Get and append data to list
dl = []
for idx, string in enumerate(series):
    var = fred.get_series(string).to_frame(name=rename[idx])
    dl.append(var)
    print(var.head(2)); print(var.tail(2))
            gdp
1946-01-01  NaN
1946-04-01  NaN
                  gdp
2025-01-01  29962.047
2025-04-01  30331.117
              hours
1947-01-01  117.931
1947-04-01  117.425
             hours
2024-10-01  97.923
2025-01-01  97.888
             cons
1959-01-01  306.1
1959-02-01  309.6
               cons
2025-05-01  20615.2
2025-06-01  20685.2
             price
1947-01-01  11.141
1947-04-01  11.299
              price
2025-01-01  127.429
2025-04-01  128.059
            rec
1854-12-01  1.0
1855-01-01  0.0
            rec
2025-05-01  0.0
2025-06-01  0.0
# Concatenate data to create data frame (time-series table)
rawm = pd.concat(dl, axis=1).sort_index()
# Resample/reindex to quarterly frequency
raw = rawm.resample('QE').last().dropna()
# real GDP
raw['rgdp'] = raw['gdp']/raw['price']
# real Consumption
raw['rcons'] = raw['cons']/raw['price']
# Display dataframe
display(raw)

HP Filter

The vector of observable endogenous variables is $[\hat y_t, \hat n_t, \hat c_t]$, where $\hat{x}_t = 100(x_t/\tau_t - 1)$ and $\tau_t$ is the smooth trend from the HP filter.

# Hodrick-Prescott filter
from statsmodels.tsa.filters.hp_filter import hpfilter
# Smoothing parameter for quarterly data
lam = 1600
# Raw data to detrend
vars = ['rgdp','hours','rcons']
nvar = len(vars)
# Detrend raw data
data = pd.DataFrame()
for var in vars:
    cycle, trend = hpfilter(raw[var],lam)
    data[var] = 100*(raw[var]/trend - 1)
ax = data.plot(); ax.grid(); ax.set_title('Observables (% Dev. from Trend)')
ax.legend([r'Real GDP ($\hat y_t$)',r'Hours ($\hat n_t$)',r'Real PCE ($\hat c_t$)'],ncol=3);
fig = ax.get_figure(); fig.set_size_inches(6.5,2.5)
# Shade NBER recessions
usrec = rawm['3/31/1959':'12/31/2024']['rec']
ax.fill_between(usrec.index, -8.5, 5, where=usrec.values, color='k', alpha=0.2)
ax.autoscale(tight=True)
[Figure: Observables (% Dev. from Trend): Real GDP, Hours, and Real PCE, with NBER recessions shaded]

RBC Model in Python

Set the parameters in the state space model, which represents the solution to the linearized RBC model.

# Define the state space model
#  Parameters
phick = 0.534; phicz = 0.487
Tkk = 0.884; Tkz = 0.319
alpha = 0.36; rho = 0.85; sigma2 = 0.4;

Simplified Observation (Measurement) Equation: $\mathbf{y}_t = \mathbf{Z} \mathbf{x}_t + \boldsymbol{\varepsilon}_t$ where $\boldsymbol{\varepsilon}_t \sim N(0, \mathbf{H})$

$$\begin{gather*} \small
\begin{bmatrix} \hat y_t \\ \hat n_t \\ \hat c_t \end{bmatrix} =
\begin{bmatrix}
1 - \frac{1-\alpha}{\alpha} \phi_{ck} & \frac{1}{\alpha} - \frac{1-\alpha}{\alpha} \phi_{cz} \\
1 - \frac{1}{\alpha} \phi_{ck} & \frac{1}{\alpha} \left[ 1 - \phi_{cz} \right] \\
\phi_{ck} & \phi_{cz}
\end{bmatrix}
\begin{bmatrix} \hat k_{t} \\ \hat z_{t} \end{bmatrix} +
\begin{bmatrix} \varepsilon_{1,t} \\ \varepsilon_{2,t} \\ \varepsilon_{3,t} \end{bmatrix}
\end{gather*}$$
# Scientific computing
import numpy as np
# Observation equation
Z11 = 1 - (1-alpha)*phick/alpha
Z12 = 1/alpha - (1-alpha)*phicz/alpha
Z21 = 1 - phick/alpha
Z22 = (1-phicz)/alpha
Z31 = phick
Z32 = phicz
Z = np.array([[Z11,Z12],[Z21,Z22],[Z31,Z32]])
H = 1e-8*np.eye(nvar)
display(Z)
display(H)
array([[ 0.05066667,  1.912     ],
       [-0.48333333,  1.425     ],
       [ 0.534     ,  0.487     ]])
array([[1.e-08, 0.e+00, 0.e+00],
       [0.e+00, 1.e-08, 0.e+00],
       [0.e+00, 0.e+00, 1.e-08]])

Simplified State (Transition) Equation: $\mathbf{x}_t = \mathbf{F} \mathbf{x}_{t-1} + \boldsymbol{\eta}_t$ where $\boldsymbol{\eta}_t \sim N(0, \mathbf{Q})$

$$\begin{gather*}
\begin{bmatrix} \hat k_t \\ \hat z_t \end{bmatrix} =
\begin{bmatrix} T_{kk} & T_{kz} \\ 0 & \rho \end{bmatrix}
\begin{bmatrix} \hat k_{t-1} \\ \hat z_{t-1} \end{bmatrix} +
\boldsymbol{\eta}_t
\end{gather*}$$
# Transition equation
F = [[Tkk,Tkz],[0,rho]]
Q = [[0,0],[0,sigma2]]
display(F)
display(Q)
[[0.884, 0.319], [0, 0.85]]
[[0, 0], [0, 0.4]]

Kalman Filter

from statsmodels.tsa.statespace.kalman_filter import KalmanFilter
# Initialize the Kalman filter
kf = KalmanFilter(k_endog=nvar, k_states=2)
kf.design = Z
kf.obs_cov = H
kf.transition = F
kf.selection = np.eye(2)   # shock-loading matrix (maps the state disturbance into the states)
kf.state_cov = Q
#   Initial state vector mean and covariance
kf.initialize_known(constant=np.zeros(2), stationary_cov=Q)
# Extract data from Dataframe as NumPy array
observations = np.array(data)
# Convert data to match the required type
observations = np.asarray(observations, order="C", dtype=np.float64)
# Bind the data to the KalmanFilter object
kf.bind(observations)

# Extract filtered state estimates
res = kf.filter()
filtered_state_means = res.filtered_state
filtered_state_covariances = res.filtered_state_cov
display(filtered_state_means.shape)
display(filtered_state_covariances.shape)
(2, 265)
(2, 2, 265)
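
As a quick diagnostic (my own addition, not part of the original example), the filtered states can be mapped back through the observation matrix to see how closely the model-implied observables track the data. This reuses Z, filtered_state_means, vars, and data from above:

# Model-implied observables from the filtered states (ignoring measurement error)
fitted = pd.DataFrame((Z @ filtered_state_means).T, index=data.index, columns=vars)
resid = data - fitted
print(resid.abs().mean())  # mean absolute gap (percentage points) for each observable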

Filtered (Estimated) States

xhat = pd.DataFrame(filtered_state_means.T,index=data.index)
ax = xhat.plot(); ax.grid(); ax.set_title('Filtered States (% Dev. from Trend)')
fig = ax.get_figure(); fig.set_size_inches(6.5,2.5)
ax.legend([r'Capital ($\hat k_t$)',r'TFP ($\hat z_t$)'],ncol=2);
# Shade NBER recessions
ax.fill_between(usrec.index, -2, 2, where=usrec.values, color='k', alpha=0.2)
ax.autoscale(tight=True)
[Figure: Filtered States (% Dev. from Trend): Capital and TFP, with NBER recessions shaded]
import matplotlib.pyplot as plt
fig, axs = plt.subplots(2,figsize=(6.5,4)); fig.set_size_inches(8.5,4.75)
axs[0].plot(data); axs[0].grid(); axs[0].set_title('Observables (% Dev. from Trend)')
axs[0].legend([r'Real GDP ($\hat y_t$)',r'Hours ($\hat n_t$)',r'Real PCE ($\hat c_t$)'],ncol=3);
axs[0].fill_between(usrec.index, -8.5, 5, where=usrec.values, color='k', alpha=0.2)
axs[1].plot(xhat); axs[1].grid(); axs[1].set_title('States (% Dev. from Trend)')
axs[1].legend([r'Capital ($\hat k_t$)',r'TFP ($\hat z_t$)'],ncol=2);
axs[1].fill_between(usrec.index, -2, 2, where=usrec.values, color='k', alpha=0.2)
axs[0].label_outer(); axs[0].autoscale(tight=True); axs[1].autoscale(tight=True)
[Figure: Two panels: Observables (% Dev. from Trend) and States (% Dev. from Trend), with NBER recessions shaded]

Parameter Estimation

loglike = res.llf
print(f"Log-likelihood: {loglike}")
Log-likelihood: -35931612459.28382
  • The Kalman filter produces the log likelihood for this parameterization

  • We could do a better job calibrating some parameters, like the discount factor ($\beta$) and the labor preference parameter ($\psi$)

  • Then we could search the parameter space for the TFP process parameters, $\{\rho, \sigma^2\}$, that maximize the log likelihood (see the sketch at the end of this section).

  • This model is very simple.

    • We could change the utility function to add a risk aversion parameter.

    • We could add investment adjustment costs and variable capital utilization.

    • We could add demand shocks from government spending or monetary policy.
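
A minimal sketch of that search over $\{\rho, \sigma^2\}$, reusing the KalmanFilter object and solution values defined above. Caveat: in the full problem the solution coefficients ($\phi_{ck}$, $\phi_{cz}$, $T_{kk}$, $T_{kz}$) also depend on $\rho$ and would need to be recomputed at each candidate value; this sketch holds them fixed purely to illustrate the mechanics. The optimizer, starting values, and bounds are my choices:

# Maximize the Kalman filter log likelihood over (rho, sigma2)
from scipy.optimize import minimize

def negloglike(theta):
    rho_, sigma2_ = theta
    kf.transition = np.array([[Tkk, Tkz], [0.0, rho_]])    # update rho in the transition
    kf.state_cov = np.array([[0.0, 0.0], [0.0, sigma2_]])  # update the TFP shock variance
    return -kf.filter().llf

opt = minimize(negloglike, x0=[0.85, 0.4],
               bounds=[(0.0, 0.99), (1e-4, 5.0)], method='L-BFGS-B')
print(opt.x)  # candidate (rho, sigma2) that maximizes the log likelihood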