Summary
Noncooperative Game Theory: An Introduction for Engineers and Computer Scientists is aimed at students interested in using game theory as a design methodology for solving problems in engineering and computer science. This book shows that such design challenges can be analyzed through game theoretical perspectives that help to pinpoint each problem's essence: Who are the players? What are their goals? Will the solution to "the game" solve the original design problem? Using the fundamentals of game theory, this book explores these issues and more.

Hardcover: 248 pages
Publisher: Princeton University Press (June 13, 2017)
Language: English

ISBN-10: 0691175217
ISBN-13: 978-0691175218

Published by Princeton University Press
Publisher's book website

Also available at Amazon.com

Summary
To the uninitiated, game theory conjures images of developing computer programs to solve board games like chess or card games like poker and, in fact, the tools behind this discipline can indeed be used for such purposes. However, game theory goes well beyond such applications and provides a framework for reasoning about problems in which multiple "players" must make decisions, with the understanding that the results of these decisions affect and are affected by the decisions of the other players. Board and card games are obvious examples of such problems, but game theory is applicable to far more "serious" domains.

The first question one typically asks when faced with a multiplayer problem is probably "How should I play?", immediately followed by "How will my opponent play?" Game theorists escape this fundamental chicken-and-egg problem by looking for "equilibrium" strategies that somehow simultaneously satisfy all the players. Game theory thus provides a framework to predict the behavior of rational players, whether in board games or in economics.

Once the notion of "equilibrium" is understood, one starts to wonder whether it is possible to find such equilibria for all games. It is not necessary to go far to find trouble: equilibria do not always exist and sometimes there is more than one equilibrium. How is one then supposed to predict the behavior of rational players? And what if a single equilibrium exists, but we do not like the predicted behavior of the players? These questions lead to some of the most interesting problems in game theory: "How does one design games that predictably lead to desirable actions by the players?" Such questions are of key interest to economists, social scientists, and engineers.
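
As a concrete illustration of these ideas, the short MATLAB function below brute-forces the pure Nash equilibria of a two-player bimatrix game; the function name pureNash and the two example games are my own choices for this sketch and do not come from the book.

function eq = pureNash(A, B)
% pureNash: list all pure Nash equilibria of a bimatrix game, where
% A(i,j) and B(i,j) are the payoffs to players 1 and 2 (both maximize)
[m, n] = size(A);
eq = [];
for i = 1:m
    for j = 1:n
        % (i,j) is an equilibrium if neither player gains by deviating alone
        if A(i,j) >= max(A(:,j)) && B(i,j) >= max(B(i,:))
            eq = [eq; i j];
        end
    end
end
end

Calling pureNash([-1 -3; 0 -2], [-1 0; -3 -2]) on a prisoner's dilemma returns the single profile (2,2), both players defecting, whereas pureNash([1 -1; -1 1], [-1 1; 1 -1]) on matching pennies returns an empty list: exactly the kind of trouble described above, which mixed strategies will later resolve.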

Modern game theory was born in the 1930s, mostly propelled by the work of John von Neumann, and further refined by Morgenstern, Kuhn, Nash, Shapley, and others. Throughout most of the 1940s and 1950s, economics was its main application, eventually leading to the 1994 Nobel Prize in Economic Sciences awarded to John Nash, John C. Harsanyi, and Reinhard Selten for their contributions to game theory. It was not until the 1970s that game theory started to have a significant impact on engineering, and in the late 1980s it led to significant breakthroughs in control theory and robust filtering. Currently, game theory pervades all areas of engineering.

Problems related to the development of pricing strategies for technological products or services have always interested engineers, but the use of game theory in technology design is a more recent development that arose from the intrinsic limitations of classical optimization-based designs. In optimization, one attempts to find values for parameters that minimize suitably defined criteria (such as monetary cost, energy consumption, heat generated, etc.). However, in most engineering applications there is always some uncertainty as to how the selected parameters will affect the final objective. One can then pose the problem of how to make sure that the selection will lead to acceptable performance even in the presence of some degree of uncertainty: the unforgiving player that, behind the scenes, conspires to wreck engineering designs. This question is at the heart of many games that appear in engineering applications. In fact, game theory provides a basic mathematical framework for robust design in engineering.
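
As a toy illustration of this worst-case viewpoint, the MATLAB fragment below compares a nominal design, which ignores uncertainty, with a robust design that minimizes the worst-case cost over an uncertainty set; the cost function J, the design grid, and the uncertainty set are all hypothetical choices made for this sketch.

% hypothetical cost: design parameter theta, uncertainty w
J = @(theta, w) (theta - 1).^2 + w .* theta;
theta = linspace(-2, 2, 401);          % candidate designs
W = linspace(-0.5, 0.5, 101);          % assumed uncertainty set
[TH, WW] = meshgrid(theta, W);
cost = J(TH, WW);                      % rows index uncertainty, columns index designs
[~, iNom] = min(J(theta, 0));          % nominal design: pretend w = 0
[~, iRob] = min(max(cost, [], 1));     % robust design: minimize the worst case over W
fprintf('nominal theta = %.2f, robust theta = %.2f\n', theta(iNom), theta(iRob));

For this particular J the nominal design is theta = 1.00 while the robust design backs off to theta = 0.75 to hedge against the worst admissible w; the inner maximization plays precisely the role of the adversarial player described above.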

A feature common to many engineering applications of game theory is that the problem does not start as a "game." In fact, the most interesting design challenge is often to construct a game that captures the essence of the problem: Who are the players? What are their goals? Will the "solution" to the game solve the original design problem? These are questions that we will encounter in this book.

Contents

The first two lectures introduce the basic elements of a mathematical game through a set of simple examples. Notions of player, game rules and objectives, information structure, player rationality, cooperative versus noncooperative solutions, and Nash equilibrium are introduced to provide the reader with an overview of the main issues to come. Subsequent lectures systematically return to all of these topics.

Lectures 3-8 are focused on zero-sum games. Starting with matrix games in lectures 3-6, we introduce the fundamental concept of saddle-point equilibrium and explore its key properties, both for pure and mixed policies. The Minimax Theorem and computational issues are also covered. The information structure of a game is first treated in lectures 7-8 with the introduction of (zero-sum) games in extensive form. Complex information structures lead to the distinction between two types of stochastic policies: mixed and behavioral policies. In these lectures we also introduce a general recursive method that will evolve in later lectures into Dynamic Programming.
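
For readers curious about the computational side, the following MATLAB sketch computes a mixed saddle-point policy of a matrix game by linear programming, using linprog from the Optimization Toolbox; the sign convention (the row player maximizes the payoff A(i,j)) and the rock-paper-scissors example are mine and need not match the book's notation.

A = [0 -1 1; 1 0 -1; -1 1 0];      % rock-paper-scissors payoff to the row player
[m, n] = size(A);
% decision variables z = [y; v]: y = row player's mixed policy, v = guaranteed value
f    = [zeros(m,1); -1];           % linprog minimizes, so minimize -v to maximize v
Aine = [-A', ones(n,1)];           % constraint A'*y >= v rewritten as -A'*y + v <= 0
bine = zeros(n,1);
Aeq  = [ones(1,m), 0];             % the mixed policy y must sum to one
beq  = 1;
lb   = [zeros(m,1); -Inf];         % y >= 0 and the value v is unconstrained
z    = linprog(f, Aine, bine, Aeq, beq, lb);
y    = z(1:m)                      % mixed saddle-point policy: [1/3; 1/3; 1/3] here
v    = z(end)                      % value of the game: 0 here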

Non-zero-sum games are treated in lectures 9-13. We introduce the concept of Nash equilibrium in a general setting and discuss its numerical computation for two-player bimatrix games. Lectures 12-13 are focused exclusively on the rich class of potential games. In these lectures we discuss several classical potential games, with some emphasis on the design of potential games to solve distributed optimization problems.
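
As a small taste of the algorithms from lectures 12-13, here is a minimal fictitious-play sketch for a two-player bimatrix game; the payoff matrices and iteration count are arbitrary choices of mine, the scheme is only guaranteed to converge for special classes such as potential games, and this is not the book's fictitiousPlay.m script listed further below.

% A(i,j), B(i,j): payoffs to players 1 and 2 (both maximize); this example is a potential game
A = [2 0; 3 1];   B = [2 3; 0 1];
[m, n] = size(A);
count1 = zeros(m,1);   count2 = zeros(n,1);    % empirical counts of past actions
a1 = 1;   a2 = 1;                              % arbitrary initial actions
for t = 1:200
    count1(a1) = count1(a1) + 1;
    count2(a2) = count2(a2) + 1;
    [~, a1] = max(A * (count2/sum(count2)));   % best response to player 2's empirical mix
    [~, a2] = max((count1/sum(count1))' * B);  % best response to player 1's empirical mix
end
p = count1/sum(count1)                         % empirical mixed policies; here they
q = count2/sum(count2)                         % concentrate on the equilibrium profile (2,2)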

The last set of lectures, 14-18, is devoted to the solution of dynamic games. We start by reviewing Dynamic Programming for (single-player) optimization in lectures 15-16 and use it as the starting point to construct saddle-point policies for zero-sum games in lectures 17-18. We treat both discrete- and continuous-time games, with a fixed or a variable termination time.
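
The flavor of these lectures can be previewed with the short min-max dynamic-programming recursion below, written for a zero-sum dynamic game with finite state and action spaces; the dynamics f, stage cost g, horizon, and information pattern (the maximizer observes the minimizer's action at each stage) are hypothetical choices made for this sketch rather than examples from the book.

nX = 5;  nU = 2;  nD = 2;  T = 10;                 % numbers of states/actions and horizon (assumed)
f = @(x,u,d) min(max(x + (u-1) - (d-1), 1), nX);   % assumed dynamics on the states 1..nX
g = @(x,u,d) (x - 3)^2 + (u - 1) - (d - 1);        % assumed stage cost, paid by the minimizer
V = zeros(nX, T+1);                                % terminal cost taken to be zero
for k = T:-1:1                                     % backward (dynamic-programming) recursion
    for x = 1:nX
        worst = zeros(nU, 1);
        for u = 1:nU
            vals = zeros(nD, 1);
            for d = 1:nD
                vals(d) = g(x,u,d) + V(f(x,u,d), k+1);
            end
            worst(u) = max(vals);                  % maximizer responds to the chosen u
        end
        V(x, k) = min(worst);                      % minimizer secures the smallest worst case
    end
end
V(:, 1)                                            % game value from each initial state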

Learning and Teaching Using This Textbook

This book was purposely designed as a textbook, and consequently, the main emphasis is on presenting material in a fashion that makes it interesting and easy for students to understand.

In writing this manuscript, there was a conscious effort to reduce verbosity. This is not to say that there was no attempt to motivate the concepts or discuss their significance (to the contrary), but the amount of text was kept to a minimum. Typically, discussion, remarks, and side comments are relegated to marginal notes so that the reader can easily follow the material presented without distraction and yet enjoy the benefit of comments on the notation and terminology, or be made aware that there is a related MATLAB command.

At the University of California, Santa Barbara, I teach the material in these lectures in one quarter with about 36 hours of class time. The class I teach is primarily aimed at first-year graduate students in the College of Engineering, but these notes were written so that they can also serve as the primary textbook for a senior-level undergraduate class, as most of the lectures only require familiarity with linear algebra and probability at an undergraduate level. Two lectures (16 and 18) also require some familiarity with differential equations, but they could be skipped or presented as optional advanced material if students do not have the appropriate prerequisites.

I have tailored the organization of the textbook to simplify the teaching and learning of the material. In particular, the sequence of the chapters emphasizes continuity, with each chapter motivated by and in logical sequence with the preceding ones. I always avoid introducing a concept in one chapter and using it again only many chapters later. It has been my experience that even if this may be economical in terms of space, it is pedagogically counterproductive. The chapters are balanced in length so that on average each can be covered in roughly 2 hours of lecture time. Not only does this greatly aid the instructor's planning, but it also makes it easier for the students to review the material taught in class.

The book includes exercises that should be solved as the reader progresses through the material. Some of these exercises clarify issues raised in the body of the text and the reader is generally pointed to such exercises in marginal notes. Other exercises are aimed at consolidating the knowledge acquired, by asking the reader to apply algorithms or approaches previously discussed. The book includes detailed solutions for all the exercises that appear in the sections titled "Practice Exercises," but it does not include solutions to those in the sections titled "Additional Exercises."

MATLAB

Computational tools such as the MATLAB software environment offer a significant step forward in teaching this class because they allow students to solve numerical problems without requiring a detailed treatment of numerical methods. By systematically annotating the theoretical developments with marginal notes that discuss the relevant commands available in MATLAB, this textbook helps students learn to use these tools. We also provide MATLAB functions that implement some of the key algorithms discussed.

The commands discussed in the "MATLAB Hints" assume that the reader has version R2015b of MATLAB and the Optimization Toolbox. However, essentially all the commands used have been fairly stable for several versions, so they are likely to work with previous and subsequent versions for several years to come. Lecture 6 assumes that the reader has installed CVX, which is a MATLAB package for Disciplined Convex Programming, distributed under the GNU General Public License 2.0.

 

Contents

 

Table of Contents: [pdf]

Index: [pdf]

Chapter 1: [pdf]

 

Corrections

 

The errata are available here: [pdf]

Comments and information about typos are most welcome.

 

Exercises

 

Solutions

Instructors: A supplementary Solutions Manual is available for this book. It is restricted to teachers using the text in courses.

Information on how to obtain a copy is available from Princeton University Press (solution manuals for professors).

 

MATLAB scripts

Scripts related to Lecture 13 (Classes of Potential Games)

improvementPath.m fictitiousPlay.m

Scripts related to Lecture 17 (State-feedback Zero-Sum Dynamic Games)

ttt_addO.m ttt_addX.m ttt_states.m ttt_value.m ttt_play.m ttt_drawboard.m

Please email me if there is a particular script from the book that you would like to have.

 

Miscellaneous

Bibtex entry

@Book{Hespanha17,
  author = {Jo{\~a}o P. Hespanha},
  title = {Noncooperative Game Theory: An Introduction for Engineers and Computer Scientists},
  publisher = {Princeton University Press},
  address = {Princeton, New Jersey},
  year = 2017,
  note = {ISBN:9780691175218}
}