Publication detail

Nash Q-learning agents in Hotelling's model: Reestablishing equilibrium

Author(s): PhDr. Jiří Kukačka Ph.D., Jan Vainer
Type: Articles in journals with impact factor
Year: 2021
Number: 0
Published in: Communications in Nonlinear Science and Numerical Simulation, 99, 105805, DOI
Keywords: Hotelling's location model, agent-based simulation, reinforcement learning, Nash Q-learning
JEL codes: C61, C63, C72, L13, R30
Suggested Citation: Vainer, J., Kukacka, J. (2021). Nash Q-learning agents in Hotelling's model: Reestablishing equilibrium. Communications in Nonlinear Science and Numerical Simulation, 99, 105805.
Grants: PRIMUS/19/HUM/17 2019-2021 Behavioral finance and macroeconomics: New insights for the mainstream
Abstract: This paper examines adaptive agents' behavior in a stochastic dynamic version of Hotelling's location model. We conduct an agent-based numerical simulation of Hotelling's setting with two agents who use the Nash Q-learning mechanism for adaptation. This allows us to explore what alterations this technique brings compared to the analytic solution of the famous static game-theoretic model, which imposes strong assumptions on the players. We discover that under Nash Q-learning and a quadratic consumer cost function, agents with a high enough valuation of future profits learn behavior resembling an aggressive market strategy: both agents make similar products and wage a price war to eliminate their opponent from the market. This behavior closely resembles the Principle of Minimum Differentiation from Hotelling's original paper with linear consumer costs, even though the quadratic consumer cost function would result in maximum differentiation of production in the original static model. Thus, the Principle of Minimum Differentiation can be justified on the basis of repeated interactions of the agents and long-run optimization.
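To illustrate the mechanism the abstract describes, the following is a minimal, self-contained sketch of Nash Q-learning on a toy discretized Hotelling line. All specifics here are illustrative assumptions, not the paper's actual configuration: the position and price grids, the payoff function, the learning parameters, and the use of pure-strategy equilibrium search are chosen only to keep the example small.

```python
import itertools
import random

# Toy assumption: three locations on the line and two price levels.
POSITIONS = [0, 1, 2]
PRICES = [1, 2]
ACTIONS = [(x, p) for x in POSITIONS for p in PRICES]

def profits(a1, a2):
    """Toy stage payoffs: one consumer per grid point buys from the firm
    with the lower price plus quadratic transport cost; ties split demand."""
    (x1, p1), (x2, p2) = a1, a2
    r1 = r2 = 0.0
    for c in POSITIONS:
        cost1 = p1 + (c - x1) ** 2
        cost2 = p2 + (c - x2) ** 2
        if cost1 < cost2:
            r1 += p1
        elif cost2 < cost1:
            r2 += p2
        else:
            r1 += p1 / 2
            r2 += p2 / 2
    return r1, r2

def pure_nash(Q1, Q2, s):
    """Find a pure-strategy Nash equilibrium of the stage game defined by
    (Q1[s], Q2[s]) by enumeration; fall back to the joint maximizer."""
    for a1, a2 in itertools.product(ACTIONS, ACTIONS):
        if all(Q1[(s, b1, a2)] <= Q1[(s, a1, a2)] for b1 in ACTIONS) and \
           all(Q2[(s, a1, b2)] <= Q2[(s, a1, a2)] for b2 in ACTIONS):
            return a1, a2
    return max(itertools.product(ACTIONS, ACTIONS),
               key=lambda j: Q1[(s, j[0], j[1])] + Q2[(s, j[0], j[1])])

def train(episodes=200, alpha=0.2, gamma=0.9, eps=0.3, seed=0):
    """Nash Q-learning loop: each agent bootstraps its Q-value with its
    payoff at a Nash equilibrium of the next state's stage game."""
    rng = random.Random(seed)
    # State = previous joint action; None is a dummy start state.
    states = [None] + list(itertools.product(ACTIONS, ACTIONS))
    Q1 = {(s, a1, a2): 0.0 for s in states for a1 in ACTIONS for a2 in ACTIONS}
    Q2 = dict(Q1)
    s = None
    for _ in range(episodes):
        if rng.random() < eps:                       # epsilon-greedy exploration
            a1, a2 = rng.choice(ACTIONS), rng.choice(ACTIONS)
        else:
            a1, a2 = pure_nash(Q1, Q2, s)
        r1, r2 = profits(a1, a2)
        s_next = (a1, a2)
        n1, n2 = pure_nash(Q1, Q2, s_next)
        # Nash Q-learning update: target uses the equilibrium continuation value.
        Q1[(s, a1, a2)] += alpha * (r1 + gamma * Q1[(s_next, n1, n2)] - Q1[(s, a1, a2)])
        Q2[(s, a1, a2)] += alpha * (r2 + gamma * Q2[(s_next, n1, n2)] - Q2[(s, a1, a2)])
        s = s_next
    return Q1, Q2

Q1, Q2 = train()
a1, a2 = pure_nash(Q1, Q2, None)
print("learned joint play from the start state:", a1, a2)
```

The key design point mirrors the technique's definition: instead of bootstrapping with each agent's own maximum (as in single-agent Q-learning), the target uses each agent's payoff at an equilibrium of the stage game induced by both agents' Q-tables in the next state.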
