Mean-field game theory is the study of strategic decision making by small interacting
agents in very large populations. It lies at the intersection of game theory with stochastic analysis and control theory. The use of the term "mean field" is inspired by
mean-field theory in physics, which considers the behavior of systems of large numbers of particles where individual particles have negligible impact upon the system. In other words, each agent acts according to its own minimization or maximization problem, taking other agents' decisions into account; because the population is large, we can assume the number of agents goes to infinity and that a representative agent exists.
In traditional game theory, the subject of study is usually a game with two players in discrete time, and the results are extended to more complex situations by induction. However, for games in continuous time with continuous states (differential games or stochastic differential games) this strategy cannot be used because of the complexity that the dynamic interactions generate. On the other hand, with MFGs we can handle large numbers of players through the mean representative agent while still describing complex state dynamics.
This class of problems was considered in the economics literature by
Boyan Jovanovic and
Robert W. Rosenthal, in the engineering literature by Minyi Huang, Roland Malhame, and
Peter E. Caines,
and independently and around the same time by mathematicians Jean-Michel Lasry and
Pierre-Louis Lions.
In continuous time a mean-field game is typically composed of a
Hamilton–Jacobi–Bellman equation that describes the
optimal control
problem of an individual and a
Fokker–Planck equation
that describes the dynamics of the aggregate distribution of agents. Under fairly general assumptions it can be proved that a class of mean-field games is the limit as N → ∞ of an N-player
Nash equilibrium.
A related concept to that of mean-field games is "mean-field-type control". In this case, a
social planner controls the distribution of states and chooses a control strategy. The solution to a mean-field-type control problem can typically be expressed as a dual adjoint Hamilton–Jacobi–Bellman equation coupled with a Kolmogorov equation. Mean-field-type game theory is the multi-agent generalization of single-agent mean-field-type control.
General Form of a Mean-field Game
The following system of equations can be used to model a typical mean-field game:

    -\partial_t u - \nu \Delta u + H(x, m, Du) = 0    (Hamilton–Jacobi–Bellman)
    \partial_t m - \nu \Delta m - \operatorname{div}(D_p H(x, m, Du)\, m) = 0    (Fokker–Planck)
    u(T, x) = G(x, m(T)), \qquad m(0, \cdot) = m_0

Here u(t, x) is the value function of a representative agent, m(t, x) is the density of the population of agents, \nu > 0 is a diffusion parameter, H is the Hamiltonian, and G is the terminal cost. The value function is solved backward in time from its terminal condition, while the density evolves forward from its initial condition, and the two equations are coupled through m and Du.
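The forward–backward coupling between the Hamilton–Jacobi–Bellman and Fokker–Planck equations can be sketched numerically with a damped fixed-point (Picard) iteration: solve the HJB equation backward in time for a guessed density trajectory, then transport the density forward with the resulting optimal drift. The sketch below uses a one-dimensional periodic domain, the quadratic Hamiltonian H(x, m, p) = p²/2 − m (so the coupling f(x, m) = m is a simple congestion penalty), and made-up parameters; it is a rough illustration, not a production solver.

```python
import numpy as np

# Illustrative 1-D discretization of the mean-field game system on the
# periodic domain [0, 1):
#   HJB (backward):  -u_t - nu*u_xx + (u_x)^2 / 2 = m
#   FP  (forward):    m_t - nu*m_xx - (u_x * m)_x = 0
# The congestion coupling f(x, m) = m, the terminal cost g, and all
# parameters below are hypothetical choices for the sketch.

Nx, Nt = 50, 100
dx, dt = 1.0 / Nx, 0.002          # dt < dx^2 / (2*nu) for the explicit scheme
nu = 0.05
x = np.arange(Nx) * dx

def d_x(v):   # central first derivative, periodic boundary
    return (np.roll(v, -1) - np.roll(v, 1)) / (2 * dx)

def lap(v):   # second derivative, periodic boundary
    return (np.roll(v, -1) - 2 * v + np.roll(v, 1)) / dx**2

g = np.cos(2 * np.pi * x)          # terminal cost: prefer the middle of the domain
m0 = np.ones(Nx)                   # start from the uniform density
m_traj = np.tile(m0, (Nt + 1, 1))  # initial guess for the trajectory m(t, x)

for _ in range(10):                # damped Picard (fixed-point) iteration
    # 1) solve HJB backward in time, given the current density trajectory
    u_traj = np.empty((Nt + 1, Nx))
    u_traj[Nt] = g
    for n in range(Nt - 1, -1, -1):
        u = u_traj[n + 1]
        u_traj[n] = u + dt * (nu * lap(u) - 0.5 * d_x(u)**2 + m_traj[n + 1])
    # 2) transport the density forward with the optimal drift a = -u_x
    m_new = np.empty((Nt + 1, Nx))
    m_new[0] = m0
    for n in range(Nt):
        m = m_new[n]
        m_new[n + 1] = m + dt * (nu * lap(m) + d_x(d_x(u_traj[n]) * m))
    m_traj = 0.5 * m_traj + 0.5 * m_new  # damping stabilizes the iteration

print(m_traj[-1].sum() * dx)       # total mass stays ~1 (conserved by the scheme)
```

The central periodic differences conserve the total mass of m exactly, which is a useful sanity check; real solvers typically use monotone upwind schemes and implicit time stepping instead of this explicit discretization.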
The basic dynamics of this system of equations can be explained by an average agent's optimal control problem. In a mean-field game, an average agent can control their movement a to influence the population's overall location by:

    dX_t = a_t \, dt + \sqrt{2\nu} \, dB_t,

where \nu is a parameter and B_t is a standard Brownian motion. By controlling their movement, the agent aims to minimize their overall expected cost C throughout the time period [0, T]:

    C = \mathbb{E}\left[ \int_0^T L(X_t, m_t, a_t) \, dt + G(X_T, m_T) \right],

where L is the running cost and G is the terminal cost.
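For a fixed feedback control, the controlled dynamics and the expected-cost criterion can be illustrated with a plain Euler–Maruyama Monte Carlo simulation. The feedback control a(x) = −x, the running cost L = a²/2, and the terminal cost G(x) = x² below are hypothetical choices made only to have something concrete to simulate; they are not part of the general model.

```python
import numpy as np

# Monte Carlo sketch of a single agent's problem: simulate the controlled
# dynamics dX_t = a_t dt + sqrt(2*nu) dB_t with Euler-Maruyama and estimate
# the expected cost E[ integral of L dt + G(X_T) ]. The control, costs, and
# parameters are illustrative choices.

rng = np.random.default_rng(0)
nu, T, n_steps, n_paths = 0.1, 1.0, 200, 20_000
dt = T / n_steps

def control(x):            # hypothetical feedback control: drift toward the origin
    return -x

X = np.zeros(n_paths)      # all sample paths start at x = 0
cost = np.zeros(n_paths)
for _ in range(n_steps):
    a = control(X)
    cost += 0.5 * a**2 * dt                      # accumulate running cost L = a^2/2
    dB = rng.normal(0.0, np.sqrt(dt), n_paths)   # Brownian increments
    X += a * dt + np.sqrt(2 * nu) * dB           # Euler-Maruyama step
cost += X**2                                     # terminal cost G(x) = x^2

print(cost.mean())         # Monte Carlo estimate of the expected cost C
```

In a full mean-field game this simulation would be coupled to the population: the running cost would depend on the density m_t, which is itself generated by the distribution of all such controlled paths.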