Chapter 51
STRUCTURAL ESTIMATION OF MARKOV
DECISION PROCESSES*
JOHN RUST
University of Wisconsin
Contents
1. Introduction
2. Solving MDP’s via dynamic programming: A brief review
2.1. Finite-horizon dynamic programming and the optimality of Markovian decision rules
2.2. Infinite-horizon dynamic programming and Bellman’s equation
2.3. Bellman’s equation, contraction mappings and optimality
2.4. A geometric series representation for MDP’s
2.5. Overview of solution methods
3. Econometric methods for discrete decision processes
3.1. Alternative models of the “error term”
3.2. Maximum likelihood estimation of DDP’s
3.3. Alternative estimation methods: Finite-horizon DDP problems
3.4. Alternative estimation methods: Infinite-horizon DDP’s
3.5. The identification problem
4. Empirical applications
4.1. Optimal replacement of bus engines
4.2. Optimal retirement from a firm
References