This course will cover the fundamentals of dynamic programming, a method for solving complex problems by breaking them down into simpler subproblems. Topics include deterministic and stochastic formulations of the principle of optimality, value and policy iteration, an introduction to finite-state Markov chains, problems with partial state information, stochastic shortest path problems, infinite horizon problems, and an introduction to approximate dynamic programming. Applications include inventory control, finance, routing, and sequential hypothesis testing.
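To give a concrete flavor of one listed topic, the following is a minimal sketch of value iteration on a toy two-state Markov decision process (the states, actions, transition probabilities, and rewards here are invented purely for illustration and are not course material):

```python
# Toy MDP: P[s][a] is a list of (probability, next_state, reward) outcomes.
# States 0 and 1, two actions each; numbers chosen only for illustration.
P = {
    0: {0: [(1.0, 0, 0.0)], 1: [(0.8, 1, 5.0), (0.2, 0, 0.0)]},
    1: {0: [(1.0, 1, 1.0)], 1: [(1.0, 0, 0.0)]},
}
GAMMA = 0.9  # discount factor

def value_iteration(P, gamma, tol=1e-10):
    """Iterate the Bellman optimality operator until the value
    function changes by less than `tol` at every state."""
    V = {s: 0.0 for s in P}
    while True:
        delta = 0.0
        for s in P:
            # Bellman backup: best expected one-step reward plus
            # discounted value of the successor state.
            best = max(
                sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
                for a in P[s]
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

V = value_iteration(P, GAMMA)
print(V)
```

The same fixed-point structure underlies policy iteration and the stochastic shortest path formulations mentioned above; only the backup operator and stopping conditions change.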
This course provides an overview of game theory with a special emphasis on its application to "multiagent systems". Game theory focuses on the study of systems composed of a collection of interacting and possibly competing decision-making entities. Examples will be drawn from engineering, economic, and social models, including multivehicle robotics, data networks, sensor networks, and electronic commerce.
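As a minimal illustration of "interacting and possibly competing decision-making entities", the sketch below enumerates pure-strategy Nash equilibria of a two-player, two-action game; the Prisoner's Dilemma payoffs are a standard textbook example, not drawn from the course itself:

```python
# Payoff table: (row_action, col_action) -> (row_payoff, col_payoff).
# Prisoner's Dilemma numbers, used here only as a familiar example.
PAYOFFS = {
    ("C", "C"): (-1, -1),
    ("C", "D"): (-3, 0),
    ("D", "C"): (0, -3),
    ("D", "D"): (-2, -2),
}
ACTIONS = ["C", "D"]

def pure_nash(payoffs, actions):
    """Return all action profiles where neither player can gain
    by unilaterally deviating (pure-strategy Nash equilibria)."""
    equilibria = []
    for a in actions:
        for b in actions:
            row_ok = all(payoffs[(a, b)][0] >= payoffs[(a2, b)][0] for a2 in actions)
            col_ok = all(payoffs[(a, b)][1] >= payoffs[(a, b2)][1] for b2 in actions)
            if row_ok and col_ok:
                equilibria.append((a, b))
    return equilibria

print(pure_nash(PAYOFFS, ACTIONS))  # mutual defection ("D", "D") is the unique equilibrium
```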
This course provides an overview of topics in game theory relevant to the design and analysis of emerging socio-technical systems, i.e., systems composed of both engineered and human decision-making entities. A representative list of topics includes mechanism design, algorithmic game theory, and strategic learning and its limits, among others.
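As one small, hedged example of the mechanism-design theme, here is a sketch of a second-price (Vickrey) auction, a classic mechanism in which truthful bidding is a dominant strategy; the bidder names and values are invented for illustration:

```python
def vickrey_auction(bids):
    """bids: dict mapping bidder -> bid amount.
    The highest bidder wins but pays the second-highest bid,
    which makes bidding one's true value a dominant strategy."""
    ranked = sorted(bids, key=bids.get, reverse=True)
    winner = ranked[0]
    price = bids[ranked[1]]  # winner pays the second-highest bid
    return winner, price

print(vickrey_auction({"alice": 10, "bob": 7, "carol": 4}))  # ('alice', 7)
```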