Mean-Field Approximation of Forward-Looking Population Dynamics

Ryota Iijima
Department of Economics, Yale University

and

Daisuke Oyama
Faculty of Economics, University of Tokyo


Abstract
We study how the equilibrium dynamics of a continuum-population game approximate those of large finite-population games. New agents stochastically arrive to replace exiting ones and make irreversible action choices to maximize their expected discounted lifetime payoffs. The key assumption is that they observe only imperfect signals about the action distribution in the population. We first show that the stochastic process of the action distribution in the finite-population game is approximated by its mean-field dynamics as the population size becomes large, where the approximation precision is uniform across all equilibria. Based on this result, we then establish continuity properties of the equilibria at the large-population limit. In particular, each agent becomes almost negligible, in the sense that in equilibrium, each agent's action is almost optimal against the (incorrect) belief that it has no impact on others' actions, as presumed in the continuum-population case. Finally, for binary-action supermodular games, we show that there is a unique equilibrium in the continuum-population game, and hence in the large finite-population games, when the observation noise is small and agents are patient. In this equilibrium, every agent chooses the risk-dominant action, and the population globally converges to the corresponding steady state.
Key Words: Population game dynamics, forward-looking expectation, large dynamic game, deterministic approximation, agent smallness, equilibrium selection.


This version: November 10, 2023.