We propose a new methodology for structural estimation of dynamic discrete choice models. We combine the Dynamic Programming (DP) solution algorithm with the Bayesian Markov Chain Monte Carlo algorithm into a single algorithm that solves the DP problem and estimates the parameters simultaneously. As a result, the computational burden of estimating a dynamic model becomes comparable to that of a static model. Another feature of our algorithm is that, even though the number of grid points on the state variable is small in each solution-estimation iteration, the number of effective grid points increases with the number of estimation iterations; this is how the algorithm helps ease the "curse of dimensionality". We simulate and estimate several versions of a simple model of entry and exit to illustrate our methodology. We also prove that, under standard conditions, the parameter draws converge in probability to the true posterior distribution, regardless of the starting values.
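To make the joint solution-estimation idea concrete, the following is a minimal, illustrative sketch of such a loop, not the paper's implementation: a toy two-state entry/exit model, a random-walk Metropolis-Hastings proposal with an implicit flat prior, and a Gaussian kernel average over past value-function iterates are all assumptions made for illustration, and every function name, bandwidth, and numeric setting is a hypothetical placeholder. The key features it mirrors are that only one Bellman update is performed per estimation iteration, and that stored iterates from earlier draws are reused to approximate the value function at new parameter values.

```python
import numpy as np

rng = np.random.default_rng(0)

BETA = 0.95                      # discount factor, assumed known
STATES = np.array([0, 1])        # 0 = out of the market, 1 = in the market


def flow_payoff(theta, s):
    """Per-period payoff of each choice; theta = (profit, entry_cost) -- both hypothetical."""
    profit, entry_cost = theta
    u_out = np.zeros(len(s))                       # payoff from staying out / exiting
    u_in = profit - entry_cost * (s == 0)          # entry cost is paid only when currently out
    return np.column_stack([u_out, u_in])


def logsumexp(v):
    m = v.max(axis=1, keepdims=True)
    return (m + np.log(np.exp(v - m).sum(axis=1, keepdims=True))).squeeze(axis=1)


def bellman_update(theta, EV):
    """A single Bellman backup under i.i.d. extreme-value taste shocks (logit form)."""
    v = flow_payoff(theta, STATES) + BETA * EV[None, :]   # next state equals today's choice
    return logsumexp(v)


def log_likelihood(theta, EV, states, choices):
    """Logit choice probabilities implied by the current *approximate* value function."""
    v = flow_payoff(theta, states) + BETA * EV[None, :]
    logp = v - logsumexp(v)[:, None]
    return logp[np.arange(len(choices)), choices].sum()


# Placeholder data: in practice these would be observed states and entry/exit choices.
states = rng.integers(0, 2, size=500)
choices = rng.integers(0, 2, size=500)

theta = np.array([1.0, 1.0])     # starting values for (profit, entry_cost)
EV = np.zeros(2)                 # current value-function iterate
history = []                     # past (theta, EV) pairs: the accumulating "effective grid"

for it in range(5000):
    theta_prop = theta + 0.1 * rng.standard_normal(2)    # random-walk proposal, flat prior assumed

    # Approximate the value function at the proposal by kernel-weighting past iterates
    # whose parameter draws lie near theta_prop (this is where past iterations are reused).
    if history:
        past_theta = np.array([h[0] for h in history[-200:]])
        past_EV = np.array([h[1] for h in history[-200:]])
        d2 = np.sum((past_theta - theta_prop) ** 2, axis=1)
        w = np.exp(-(d2 - d2.min()) / 0.05)               # bandwidth 0.05 is an arbitrary choice
        EV_prop = (w[:, None] * past_EV).sum(axis=0) / w.sum()
    else:
        EV_prop = np.zeros(2)

    # Metropolis-Hastings accept/reject step using the approximate value functions.
    log_alpha = (log_likelihood(theta_prop, EV_prop, states, choices)
                 - log_likelihood(theta, EV, states, choices))
    if np.log(rng.uniform()) < log_alpha:
        theta, EV = theta_prop, EV_prop

    # Only ONE Bellman update per estimation iteration; the result is stored for later reuse,
    # so the DP problem is never solved to convergence at any single parameter draw.
    EV = bellman_update(theta, EV)
    history.append((theta.copy(), EV.copy()))
```

Because each iteration does only a single Bellman backup, the per-iteration cost is roughly that of a static-model likelihood evaluation, while the stored (theta, EV) pairs play the role of the growing set of effective grid points described in the abstract.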
QED Working Paper Number: 1118
Bayesian Estimation
Dynamic Discrete Choice Model
Dynamic Programming
Markov Chain Monte Carlo
Bayesian Dynamic Programming Estimation