Jean-François Mertens (11 March 1946 – 17 July 2012) was a Belgian game theorist and mathematical economist. Mertens contributed to economic theory with regard to order-book market games, cooperative games, noncooperative games, repeated games, epistemic models of strategic behavior, and refinements of Nash equilibrium (see solution concept). In cooperative game theory he contributed to the solution concepts called the core and the Shapley value. Regarding repeated games and stochastic games, Mertens's 1982 and 1986 survey articles, and his 1994 survey co-authored with Sylvain Sorin and Shmuel Zamir, are compendiums of results on this topic, including his own contributions. Mertens also made contributions to probability theory and published articles on elementary topology.


Epistemic models

Mertens and Zamir implemented John Harsanyi's proposal to model games with incomplete information by supposing that each player is characterized by a privately known type that describes his feasible strategies and payoffs, as well as a probability distribution over other players' types. They constructed a universal space of types in which, subject to specified consistency conditions, each type corresponds to the infinite hierarchy of his probabilistic beliefs about others' probabilistic beliefs. They also showed that any subspace can be approximated arbitrarily closely by a finite subspace, which is the usual tactic in applications.
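A finite Harsanyi type space makes the first levels of such a belief hierarchy concrete. The sketch below is purely illustrative (the common prior, states, and type labels are invented, not taken from Mertens and Zamir): from a common prior over (state, player 1's type, player 2's type), each type's first-order belief about the state, and his belief about the opponent's type, are conditional probabilities.

```python
from collections import defaultdict

# Hypothetical common prior over (state, player 1's type, player 2's type).
prior = {
    ('G', 'a', 'x'): 0.3, ('G', 'a', 'y'): 0.1, ('B', 'a', 'x'): 0.1,
    ('G', 'b', 'x'): 0.1, ('B', 'b', 'x'): 0.1, ('B', 'b', 'y'): 0.3,
}

def first_order_belief(player, own_type):
    """P(state | own type): the first level of the belief hierarchy."""
    mass = defaultdict(float)
    for (state, t1, t2), p in prior.items():
        if (t1 if player == 1 else t2) == own_type:
            mass[state] += p
    total = sum(mass.values())
    return {s: p / total for s, p in mass.items()}

def belief_about_opponent(player, own_type):
    """P(opponent's type | own type): feeds the second level of the hierarchy."""
    mass = defaultdict(float)
    for (state, t1, t2), p in prior.items():
        mine, theirs = (t1, t2) if player == 1 else (t2, t1)
        if mine == own_type:
            mass[theirs] += p
    total = sum(mass.values())
    return {t: p / total for t, p in mass.items()}
```

Composing `belief_about_opponent` with each opponent type's own `first_order_belief` yields the second-order beliefs, and so on up the hierarchy; the Mertens–Zamir construction is the space of all such (consistent) infinite hierarchies.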


Repeated games with incomplete information

Repeated games with incomplete information were pioneered by Aumann and Maschler. Two of Jean-François Mertens's contributions to the field are extensions of repeated two-person zero-sum games with incomplete information on both sides, regarding both (1) the type of information available to players and (2) the signalling structure.
* (1) Information: Mertens extended the theory from the independent case, where the private information of the players is generated by independent random variables, to the dependent case, where correlation is allowed.
* (2) Signalling structures: the standard signalling theory, in which after each stage both players are informed of the previous moves played, was extended to deal with general signalling structures in which after each stage each player gets a private signal that may depend on the moves and on the state.

In those set-ups Jean-François Mertens provided an extension of the characterization of the minmax and maxmin values for the infinite game in the dependent case with state-independent signals. Additionally, with Shmuel Zamir, Jean-François Mertens showed the existence of a limiting value. Such a value can be thought of either as the limit of the values v_n of the n-stage games, as n goes to infinity, or as the limit of the values v_λ of the λ-discounted games, as agents become more patient and λ → 1. A building block of Mertens and Zamir's approach is the construction of an operator, now simply referred to as the MZ operator in the field in their honor. In continuous time (differential games with incomplete information), the MZ operator becomes an infinitesimal operator at the core of the theory of such games. Characterized as the unique solution of a pair of functional equations, the limit value, as Mertens and Zamir showed, may be a transcendental function, unlike the maxmin or the minmax (the value in the complete information case). Mertens also found the exact rate of convergence in the case of games with incomplete information on one side and general signalling structures. A detailed analysis of the speed of convergence of the n-stage (finitely repeated) game value to its limit has profound links to the central limit theorem and the normal law, as well as to the maximal variation of bounded martingales. Attacking the study of the difficult case of games with state-dependent signals and without recursive structure, Mertens and Zamir introduced new tools based on an auxiliary game, reducing the set of strategies to a core that is 'statistically sufficient.' Collectively, Jean-François Mertens's contributions with Zamir (and also with Sorin) provide the foundation for a general theory of two-person zero-sum repeated games that encompasses stochastic and incomplete-information aspects, and in which concepts of wide relevance are deployed, such as reputation and bounds on rational levels for the payoffs, along with tools like the splitting lemma, signalling, and approachability. While in many ways Mertens's work here goes back to the von Neumann original roots of game theory, with its zero-sum two-person set-up, its vitality and innovations with wider application have been pervasive.
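The pair of functional equations characterizing the limit value can be stated compactly. As a hedged sketch in the notation standard in the literature (u denotes the value of the non-revealing game, cav_p concavification with respect to player 1's beliefs p, and vex_q convexification with respect to player 2's beliefs q), the Mertens–Zamir system is usually written as:

```latex
w(p,q) = \operatorname{cav}_p \min\bigl(u(p,q),\, w(p,q)\bigr)
       = \operatorname{vex}_q \max\bigl(u(p,q),\, w(p,q)\bigr),
\qquad \lim_{n\to\infty} v_n = \lim_{\lambda\to 1} v_\lambda = w .
```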


Stochastic games

Stochastic games were introduced by Lloyd Shapley in 1953. The first paper studied the discounted two-person zero-sum stochastic game with finitely many states and actions and demonstrated the existence of a value and of stationary optimal strategies. The study of the undiscounted case evolved over the following three decades, with solutions of special cases by Blackwell and Ferguson in 1968 and Kohlberg in 1974. The existence of an undiscounted value in a very strong sense, both a uniform value and a limiting-average value, was proved in 1981 by Jean-François Mertens and Abraham Neyman. The study of the non-zero-sum case with general state and action spaces attracted much attention, and Mertens and Parthasarathy proved a general existence result under the condition that the transitions, as a function of the state and actions, are norm-continuous in the actions.
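Shapley's discounted model can be solved by the value iteration he introduced: repeatedly apply the operator that, at each state, takes the value of the one-shot matrix game formed by the stage payoff plus the discounted expected continuation value. A minimal sketch for games with 2×2 action sets (the example game data in the test are invented for illustration, not from Shapley's paper):

```python
def val2x2(A):
    """Value of a 2x2 zero-sum matrix game (row player maximizes)."""
    lo = max(min(A[0]), min(A[1]))                      # maxmin (row security level)
    hi = min(max(A[0][j], A[1][j]) for j in range(2))   # minmax (column security level)
    if lo == hi:                                        # pure saddle point exists
        return lo
    a, b = A[0]
    c, d = A[1]
    return (a * d - b * c) / (a + d - b - c)            # fully mixed 2x2 value formula

def shapley_value_iteration(payoffs, trans, beta, iters=200):
    """Iterate v <- val[ stage payoff + beta * expected continuation value ].

    payoffs[s][i][j]: stage payoff in state s under actions (i, j);
    trans[s][i][j][t]: probability of moving to state t; beta: discount factor.
    """
    n = len(payoffs)
    v = [0.0] * n
    for _ in range(iters):
        v = [val2x2([[payoffs[s][i][j]
                      + beta * sum(trans[s][i][j][t] * v[t] for t in range(n))
                      for j in range(2)] for i in range(2)])
             for s in range(n)]
    return v
```

The iteration is a contraction with modulus beta, so it converges geometrically to the discounted value; the undiscounted (uniform) value of Mertens and Neyman is the much harder object obtained as the discount vanishes.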


Market games: limit price mechanism

Mertens had the idea to use linear competitive economies as an order book to model limit orders and to generalize double auctions to a multivariate set-up. Acceptable relative prices of players are conveyed by their linear preferences; money can be one of the goods, and it is acceptable for agents to have positive marginal utility for money in this case (after all, agents are really just orders!). In fact this is the case for most orders in practice. More than one order (and corresponding order-agent) can come from the same actual agent. In equilibrium, a good sold must have been exchanged at a relative price, compared to the good bought, no less than the one implied by the utility function. Goods brought to the market (the quantities in the order) are conveyed by initial endowments. Limit orders are represented as follows: the order-agent brings one good to the market and has non-zero marginal utilities in that good and in another one (money or the numeraire). An ''at market'' sell order will have zero utility for the good sold ''at market'' and positive utility for money or the numeraire. Mertens clears orders, creating a matching engine, by using the competitive equilibrium, in spite of the most usual interiority conditions being violated for the auxiliary linear economy. Mertens's mechanism provides a generalization of Shapley–Shubik trading posts and has the potential for real-life implementation with limit orders across markets, rather than with just one specialist in one market.
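The order-book idea is easiest to see in the classical one-good special case that Mertens's mechanism generalizes. A minimal sketch of uniform-price clearing of a single-good double auction (the midpoint pricing convention and all order data are illustrative choices, not part of Mertens's multivariate mechanism):

```python
def clear_double_auction(bids, asks):
    """Uniform-price clearing of a one-good double auction.

    bids: list of (limit_price, quantity) buy orders;
    asks: list of (limit_price, quantity) sell orders.
    Returns (traded_quantity, clearing_price); the price is the midpoint of
    the last crossed bid/ask pair (one common convention), or None if no trade.
    """
    bids = sorted([list(o) for o in bids], key=lambda o: -o[0])  # best bids first
    asks = sorted([list(o) for o in asks], key=lambda o: o[0])   # best asks first
    volume, price = 0.0, None
    while bids and asks and bids[0][0] >= asks[0][0]:
        traded = min(bids[0][1], asks[0][1])
        volume += traded
        price = (bids[0][0] + asks[0][0]) / 2.0
        bids[0][1] -= traded
        asks[0][1] -= traded
        if bids[0][1] == 0:
            bids.pop(0)
        if asks[0][1] == 0:
            asks.pop(0)
    return volume, price
```

In Mertens's formulation each such order becomes a linear order-agent in an auxiliary linear economy, and the clearing price above is replaced by a full competitive-equilibrium price vector across many goods at once.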


Shapley value

The diagonal formula in the theory of non-atomic cooperative games elegantly attributes the Shapley value of each infinitesimal player as his marginal contribution to the worth of a perfect sample of the population of players, averaged over all possible sample sizes. Such a marginal contribution is most easily expressed in the form of a derivative, leading to the diagonal formula formulated by Aumann and Shapley. This is the historical reason why some differentiability conditions were originally required to define the Shapley value of non-atomic cooperative games. By first exchanging the order of taking the "average over all possible sample sizes" and taking such a derivative, Jean-François Mertens uses the smoothing effect of this averaging process to extend the applicability of the diagonal formula. This trick alone works well for majority games (represented by a step function applied to the percentage of population in the coalition). Exploiting even further this idea of commuting averages with the derivative, Jean-François Mertens looks at invariant transformations and takes averages over those before taking the derivative. Doing so, Mertens extends the diagonal formula to a much larger space of games, defining a Shapley value at the same time.
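For finitely many players, the "average marginal contribution" reading of the Shapley value can be computed directly by averaging over all orders of arrival. A small sketch (the three-player majority game used in the test is a standard textbook illustration, not an example from Mertens's paper):

```python
from itertools import permutations

def shapley_value(players, v):
    """Shapley value as the average marginal contribution over arrival orders.

    players: list of player labels;
    v: characteristic function taking a frozenset of players to a worth.
    """
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            before = v(frozenset(coalition))
            coalition.add(p)
            phi[p] += v(frozenset(coalition)) - before   # marginal contribution
    return {p: total / len(orders) for p, total in phi.items()}
```

In a three-player majority game (a coalition is worth 1 iff it has at least two members), each player is pivotal exactly when he arrives second, which happens in a third of the orders, so each gets value 1/3. The non-atomic majority games in the text are the continuum limit of such step-function games.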


Refinements and Mertens-stable equilibria

Solution concepts that are refinements of Nash equilibrium have been motivated primarily by arguments for backward induction and forward induction. Backward induction posits that a player's optimal action now anticipates the optimality of his and others' future actions. The refinement called subgame perfect equilibrium implements a weak version of backward induction, and increasingly stronger versions are sequential equilibrium, perfect equilibrium, quasi-perfect equilibrium, and proper equilibrium, where the latter three are obtained as limits of perturbed strategies. Forward induction posits that a player's optimal action now presumes the optimality of others' past actions whenever that is consistent with his observations. Forward induction is satisfied by a sequential equilibrium for which a player's belief at an information set assigns probability only to others' optimal strategies that enable that information to be reached. In particular, since completely mixed Nash equilibria are sequential, such equilibria, when they exist, satisfy both forward and backward induction. In his work Mertens manages for the first time to select Nash equilibria that satisfy both forward and backward induction. The method is to let this feature be inherited from perturbed games that are forced to have completely mixed strategies; the goal is achieved only with Mertens-stable equilibria, not with the simpler Kohlberg–Mertens equilibria.

Elon Kohlberg and Mertens emphasized that a solution concept should be consistent with an admissible decision rule. Moreover, it should satisfy the ''invariance'' principle that it should not depend on which among the many equivalent representations of the strategic situation as an extensive-form game is used. In particular, it should depend only on the reduced normal form of the game obtained after elimination of pure strategies that are redundant because their payoffs for all players can be replicated by a mixture of other pure strategies. Mertens also emphasized the importance of the ''small worlds'' principle: a solution concept should depend only on the ordinal properties of players' preferences, and should not depend on whether the game includes extraneous players whose actions have no effect on the original players' feasible strategies and payoffs.

Kohlberg and Mertens tentatively defined a set-valued solution concept called stability for games with finite numbers of pure strategies that satisfies admissibility, invariance and forward induction, but a counterexample showed that it need not satisfy backward induction; viz. the set might not include a sequential equilibrium. Subsequently, Mertens defined a refinement, also called stability and now often called a set of Mertens-stable equilibria, that has several desirable properties:
* Admissibility and perfection: All equilibria in a stable set are perfect, hence admissible.
* Backward induction and forward induction: A stable set includes a proper equilibrium of the normal form of the game that induces a quasi-perfect and sequential equilibrium in every extensive-form game with perfect recall that has the same normal form. A subset of a stable set survives iterative elimination of weakly dominated strategies and of strategies that are inferior replies at every equilibrium in the set.
* Invariance and small worlds: The stable sets of a game are the projections of the stable sets of any larger game in which it is embedded while preserving the original players' feasible strategies and payoffs.
* Decomposition and player splitting: The stable sets of the product of two independent games are the products of their stable sets. Stable sets are not affected by splitting a player into agents such that no path through the game tree includes actions of two agents.

For two-player games with perfect recall and generic payoffs, stability is equivalent to just three of these properties: a stable set uses only undominated strategies, includes a quasi-perfect equilibrium, and is immune to embedding in a larger game. A stable set is defined mathematically by (in brief) the essentiality of the projection map from a closed connected neighborhood in the graph of the Nash equilibria over the space of perturbed games obtained by perturbing players' strategies toward completely mixed strategies. This definition entails more than the property that every nearby game has a nearby equilibrium. Essentiality requires further that no deformation of the projection maps to the boundary, which ensures that perturbations of the fixed-point problem defining Nash equilibria have nearby solutions. This is apparently necessary to obtain all the desirable properties listed above.
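Admissibility (using only strategies that are not weakly dominated) is directly checkable in small finite games. A minimal sketch for the row player of a finite game (the function name and the payoff matrices in the test are invented for illustration; this checks domination by pure strategies only, whereas the full notion also allows domination by mixed strategies):

```python
def weakly_dominated_rows(A):
    """Indices of row strategies weakly dominated by another pure row strategy.

    A: the row player's payoff matrix as a list of rows. Row i is weakly
    dominated by row k if k is never worse against any column and strictly
    better against at least one.
    """
    dominated = []
    for i, row_i in enumerate(A):
        for k, row_k in enumerate(A):
            if k == i:
                continue
            never_worse = all(a >= b for a, b in zip(row_k, row_i))
            sometimes_better = any(a > b for a, b in zip(row_k, row_i))
            if never_worse and sometimes_better:
                dominated.append(i)
                break
    return dominated
```

Iterating such elimination (for both players, until nothing changes) gives the procedure that, per the properties above, a subset of a Mertens-stable set is guaranteed to survive.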


Social choice theory and relative utilitarianism

A social welfare function (SWF) maps profiles of individual preferences to social preferences over a fixed set of alternatives. In a seminal paper, Arrow (1950)Arrow, K.J., "A Difficulty in the Concept of Social Welfare", ''Journal of Political Economy'' 58(4) (August 1950), pp. 328–346. proved the famous "impossibility theorem": there does not exist an SWF that satisfies a very minimal system of axioms: ''Unrestricted Domain'', ''Independence of Irrelevant Alternatives'', the ''Pareto criterion'' and ''Non-dictatorship''. A large literature documents various ways to relax Arrow's axioms to obtain possibility results. Relative Utilitarianism (RU) (Dhillon and Mertens, 1999)Dhillon, A. and J.F. Mertens, "Relative Utilitarianism", ''Econometrica'' 67(3) (May 1999), pp. 471–498. is an SWF that consists of normalizing individual utilities between 0 and 1 and then adding them; it is a "possibility" result derived from a system of axioms that are very close to Arrow's original ones but modified for the space of preferences over lotteries. Unlike classical utilitarianism, RU does not assume cardinal utility or interpersonal comparability. Starting from individual preferences over lotteries, which are assumed to satisfy the von Neumann–Morgenstern axioms (or equivalent), the axiom system uniquely fixes the interpersonal comparisons. The theorem can be interpreted as providing an axiomatic foundation for the "right" interpersonal comparisons, a problem that has plagued social choice theory for a long time. The axioms are:
* ''Individualism:'' If all individuals are indifferent between all alternatives, then so is society.
* ''Non-triviality:'' The SWF is not constantly totally indifferent between all alternatives.
* ''No ill will:'' It is not true that when all individuals but one are totally indifferent, society's preferences are opposite to his.
* ''Anonymity:'' A permutation of all individuals leaves the social preferences unchanged.
* ''Independence of redundant alternatives:'' This axiom restricts Arrow's Independence of Irrelevant Alternatives (IIA) to the case where, both before and after the change, the "irrelevant" alternatives are lotteries on the other alternatives.
* ''Monotonicity'', which is much weaker than the following "good will" axiom: consider two lotteries p and q and two preference profiles that coincide for all individuals except i; if i is indifferent between p and q in the first profile but strictly prefers p to q in the second, then society strictly prefers p to q in the second profile as well.
* ''Continuity'', which is essentially a closed-graph property, taking the strongest possible convergence for preference profiles.

The main theorem shows that RU satisfies all the axioms, and that if the number of individuals is greater than three and the number of candidates is greater than five, then any SWF satisfying the above axioms is equivalent to RU whenever there exist at least two individuals who do not have exactly the same or exactly opposite preferences.
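The normalize-and-sum rule itself is simple to state computationally. A minimal sketch, assuming each individual reports von Neumann–Morgenstern utilities over a common finite set of alternatives (the function name and the profile in the test are illustrative, not from the paper):

```python
def relative_utilitarian_scores(profiles):
    """Relative utilitarianism: rescale each individual's utilities to [0, 1],
    then sum across individuals to get each alternative's social score.

    profiles: one dict per individual, mapping alternative -> utility.
    A totally indifferent individual contributes nothing (cf. Individualism).
    """
    alternatives = list(profiles[0].keys())
    scores = {a: 0.0 for a in alternatives}
    for u in profiles:
        lo, hi = min(u.values()), max(u.values())
        if hi == lo:                       # indifferent between all alternatives
            continue
        for a in alternatives:
            scores[a] += (u[a] - lo) / (hi - lo)
    return scores
```

Because each utility function is rescaled by its own range, the rule is invariant to the affine transformations that leave von Neumann–Morgenstern preferences unchanged, which is how RU avoids assuming cardinal utility or interpersonal comparability.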


Intergenerational equity in policy evaluation

Relative utilitarianism can serve to rationalize using 2% as an intergenerationally fair social discount rate for cost-benefit analysis. Mertens and Rubinchik show that a shift-invariant welfare function defined on a rich space of (temporary) policies, if differentiable, has as a derivative a discounted sum of the policy change, with a fixed discount rate, i.e., the induced social discount rate. (Shift-invariance requires a function evaluated on a shifted policy to return an affine transformation of the value of the original policy, where the coefficients depend on the time-shift only.) In an overlapping-generations model with exogenous growth (with time being the whole real line), the relative utilitarian function is shift-invariant when evaluated on (small temporary) policies around a balanced-growth equilibrium (with the capital stock growing exponentially). When policies are represented as changes in the endowments of individuals (transfers or taxes), and the utilities of all generations are weighted equally, the social discount rate induced by relative utilitarianism is the growth rate of per capita GDP (2% in the U.S.). This is also consistent with current practice as described in Circular A-4 of the US Office of Management and Budget, which states:
:If your rule will have important intergenerational benefits or costs you might consider a further sensitivity analysis using a lower but positive discount rate in addition to calculating net benefits using discount rates of 3 and 7 percent.
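The practical stakes of the discount-rate choice are easy to illustrate. A small sketch (the benefit size and horizon are hypothetical, chosen only to show how strongly a 2% versus 7% rate affects intergenerational valuations):

```python
def present_value(amount, rate, years):
    """Discounted present value of a single benefit received `years` from now."""
    return amount / (1.0 + rate) ** years

# A hypothetical benefit of 1,000,000 realized 100 years from now, valued
# at the growth-rate-based 2% rate versus the Circular A-4 rates of 3% and 7%.
for rate in (0.02, 0.03, 0.07):
    pv = present_value(1_000_000, rate, 100)
    print(f"rate {rate:.0%}: present value {pv:,.0f}")
```

The 2% valuation exceeds the 7% valuation by roughly two orders of magnitude at a century horizon, which is why the induced social discount rate matters so much for policies with intergenerational costs or benefits.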

