Decisions are often made sequentially and in environments where their outcomes are subject to stochastic variability. Dynamic Programming (DP) provides a powerful tool for obtaining structural insight about, as well as computing prescriptions for, such decisions. The method finds wide application in operations management, marketing, economics, and finance, among other fields. This course is intended to provide a rigorous introduction to the method with an emphasis on applications from these fields. When appropriate, finite or countable state Markovian settings will be used to obtain theoretical results with a minimum of technical fuss. Implementation and computational issues will be discussed.
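To give a flavor of the kind of computation the course covers, the sketch below runs value iteration on a small, entirely hypothetical finite-state Markov decision process (the transition matrix `P`, reward matrix `R`, and discount factor `gamma` are illustrative assumptions, not from the course materials):

```python
import numpy as np

# Hypothetical 2-state, 2-action MDP for illustration only:
# P[a, s, s'] = transition probability under action a,
# R[a, s]     = expected one-step reward for action a in state s.
P = np.array([
    [[0.9, 0.1], [0.2, 0.8]],   # action 0
    [[0.5, 0.5], [0.7, 0.3]],   # action 1
])
R = np.array([
    [1.0, 0.0],                  # action 0
    [0.5, 2.0],                  # action 1
])
gamma = 0.95                     # discount factor (assumed)

V = np.zeros(2)                  # value function estimate
for _ in range(10_000):
    # Bellman optimality update:
    # Q(a, s) = R(a, s) + gamma * sum_{s'} P(a, s, s') V(s')
    Q = R + gamma * (P @ V)
    V_new = Q.max(axis=0)        # maximize over actions
    if np.abs(V_new - V).max() < 1e-10:
        V = V_new
        break
    V = V_new

policy = Q.argmax(axis=0)        # greedy policy w.r.t. the converged values
```

Because the discount factor is below one, the Bellman update is a contraction, so the iteration converges to the unique optimal value function regardless of the starting point.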
The following textbook will be used extensively as the primary reading for the course: Dynamic Programming and Optimal Control, Vols. 1 (3rd Edition) and 2 (4th Edition) by Dimitri P. Bertsekas. In addition, a few research papers will be distributed in class.
The course is for Ph.D. students only. Prerequisites: linear and/or nonlinear optimization and Markov chain theory.
Description and/or course criteria last updated: 01/28/2013