Abstract
We describe a two-step algorithm for estimating dynamic games under the assumption that behavior is consistent with Markov perfect equilibrium. In the first step, the policy functions and the law of motion for the state variables are estimated. In the second step, the remaining structural parameters are estimated using the optimality conditions for equilibrium. The second step estimator is a simple simulated minimum distance estimator. The algorithm applies to a broad class of models, including industry competition models with both discrete and continuous controls such as the Ericson and Pakes (1995) model. We test the algorithm on a class of dynamic discrete choice models with normally distributed errors, and a class of dynamic oligopoly models similar to that of Pakes and McGuire (1994).
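To illustrate the two-step structure described above, the following minimal Python sketch works through a deliberately simple toy model: a single agent acts when a scalar state plus a shock exceeds a threshold parameter. Step 1 recovers a reduced-form policy (binned choice frequencies) and the law of motion (a least-squares transition rule) from the data; step 2 forward-simulates under these first-stage estimates and picks the structural parameter by minimum distance. All names (`simulate_data`, `policy_hat`, `trans_coef`, the threshold model itself) are hypothetical, and the second-step objective here matches choice probabilities rather than the equilibrium optimality conditions used in the paper; it is only meant to mimic the estimator's two-step structure, not to implement it.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# ---- Toy data-generating process (illustrative only, not from the paper) ----
# The agent acts (a = 1) when state + shock exceeds a threshold theta_true.
def simulate_data(theta_true, n_obs=5000):
    states, actions, s = np.empty(n_obs), np.empty(n_obs), 0.0
    for t in range(n_obs):
        a = float(s + rng.normal() > theta_true)        # true policy
        states[t], actions[t] = s, a
        s = 0.8 * s + 0.5 * a + 0.3 * rng.normal()      # law of motion for the state
    return states, actions

states, actions = simulate_data(theta_true=0.5)

# ---- Step 1: estimate the policy function and the law of motion ----
# Policy: binned choice frequencies P(a = 1 | s). Law of motion: s' on (s, a).
bins = np.linspace(states.min(), states.max(), 11)
bin_idx = np.clip(np.digitize(states, bins) - 1, 0, 9)
policy_hat = np.array([actions[bin_idx == b].mean() if (bin_idx == b).any() else 0.5
                       for b in range(10)])
trans_X = np.column_stack([states[:-1], actions[:-1]])
trans_coef = np.linalg.lstsq(trans_X, states[1:], rcond=None)[0]

# ---- Step 2: simulated minimum distance over the structural parameter ----
# Forward-simulate states with the estimated law of motion and compare the choice
# probabilities implied by a candidate theta, Phi(s - theta), with the first-stage
# policy estimates; minimize the average squared gap over a parameter grid.
def objective(theta, n_sim=5000):
    s, gaps = 0.0, []
    for _ in range(n_sim):
        p_model = norm_cdf(s - theta)                   # choice prob. implied by theta
        b = int(np.clip(np.digitize(s, bins) - 1, 0, 9))
        gaps.append((p_model - policy_hat[b]) ** 2)     # distance to first-stage policy
        a = float(rng.random() < p_model)
        s = trans_coef[0] * s + trans_coef[1] * a + 0.3 * rng.normal()
    return float(np.mean(gaps))

grid = np.linspace(0.0, 1.0, 21)
theta_hat = grid[int(np.argmin([objective(th) for th in grid]))]
print(f"estimated theta = {theta_hat:.2f}")  # should land near the true 0.5 in this toy setup
```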