Course description
The course will provide an introduction to some dynamic optimization methods in continuous and discrete time (optimal control problems, Hamilton-Jacobi-Bellman equations and Bellman equations) which are widely employed in the economic literature, both in microeconomics (e.g. intertemporal consumer theory, life-cycle theories) and in macroeconomics (e.g. growth theory).
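As a flavor of the discrete-time Bellman equations mentioned above, here is a minimal value-iteration sketch for a hypothetical cake-eating problem (an illustration only, not a course exercise): V(w) = max over 0 < c <= w of log(c) + beta * V(w - c).

```python
import numpy as np

# Hypothetical cake-eating illustration of a discrete-time Bellman equation:
# V(w) = max_{0 < c <= w} { log(c) + beta * V(w - c) }
beta = 0.95
grid = np.linspace(1e-3, 1.0, 200)   # grid of cake sizes w
V = np.zeros(len(grid))              # initial guess for the value function

for _ in range(500):                 # iterate the Bellman operator to a fixed point
    V_new = np.empty_like(V)
    for i, w in enumerate(grid):
        c = grid[grid <= w]          # feasible consumption levels on the grid
        # continuation value of leaving w - c for tomorrow, via interpolation
        cont = np.interp(w - c, grid, V)
        V_new[i] = np.max(np.log(c) + beta * cont)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new
```

Since the Bellman operator is a contraction (with modulus beta), the iteration converges to the unique fixed point, and the computed value function is increasing in the cake size.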
Technicalities will be covered only insofar as they are necessary for solving the problems correctly.
The second part of the course will be devoted to solving a dynamic optimization problem with Mathematica, a software package for solving and visualizing equations, functions, and other mathematical objects.
Topics
Prerequisites
No prior knowledge is required, although prior exposure to optimal control methods is helpful. For the Mathematica lab, bring your laptop with Mathematica installed (trial version here).
Learning outcomes
The course aims to enable Ph.D. students to model and solve intertemporal problems relevant to their research projects; these problems will be treated in greater depth in other courses of the Ph.D. program in Economics.
Syllabus
A friendly introduction to optimal control theory is Chiang, A. (1992): Elements of Dynamic Optimization. McGraw-Hill, New York.
A harder-to-digest handbook on dynamic programming is Stokey, N. L. and R. E. Lucas (1989): Recursive Methods in Economic Dynamics. Harvard University Press.
We will replicate the results contained in