Publication detail

Nash Q-learning agents in Hotelling's model: Reestablishing equilibrium

Author(s): PhDr. Jiří Kukačka, Ph.D., Jan Vainer
Type: Submissions
Year: 2019
Published in: SSRN Working Paper; revise & resubmit at COMMUN NONLINEAR SCI
Keywords: Hotelling's location model, Agent-based simulation, Reinforcement learning, Nash Q-learning
JEL codes: C61, C63, C72, L13, R30
Grants: PRIMUS/19/HUM/17 (2019–2021): Behavioral finance and macroeconomics: New insights for the mainstream
Abstract: This paper examines the behavior of adaptive agents in Hotelling's location model. We conduct an agent-based simulation in Hotelling's setting with two agents who use the Nash Q-learning mechanism for adaptation. This allows us to explore how the results change under this technique compared to the original analytic solution of the famous game-theoretic model, which imposes strong assumptions on the players. We find that under Nash Q-learning and a quadratic consumer cost function, agents with a sufficiently high valuation of future profits learn behavior resembling an aggressive market strategy, in which both agents make similar products and wage a price war to eliminate their opponent from the market. This behavior closely resembles the Principle of Minimum Differentiation from Hotelling's original paper with linear consumer costs, even though the simulation uses quadratic consumer cost functions, which in the original model lead to maximum differentiation of production. Our results thus suggest that the Principle of Minimum Differentiation can be justified by repeated interaction of the agents and long-run optimization.
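For intuition, the following minimal Python sketch illustrates the kind of setup the abstract describes: two firms repeatedly choose a (location, price) pair on a discretized Hotelling line with quadratic consumer transport costs, adapting via a Nash Q-learning update in the style of Hu and Wellman. All concrete choices here (action grids, learning rate, discount factor, exploration rate, zero production costs, the pure-strategy Nash shortcut) are illustrative assumptions, not the paper's actual model or calibration.

```python
import numpy as np

# Hedged sketch, not the paper's code: a single-state repeated Hotelling
# stage game with quadratic transport costs and a Nash Q-learning update.
# All parameter values below are illustrative assumptions.

rng = np.random.default_rng(0)

LOCS = np.linspace(0.0, 1.0, 5)      # candidate locations on [0, 1]
PRICES = np.linspace(0.1, 1.0, 5)    # candidate prices
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1   # learning rate, discount, exploration

ACTIONS = [(x, p) for x in LOCS for p in PRICES]
N = len(ACTIONS)

def profits(x1, p1, x2, p2):
    """Stage profits with a unit mass of consumers on [0, 1], quadratic
    transport cost (z - x)^2, and zero production costs (assumptions)."""
    if np.isclose(x1, x2):
        # Identical products: the cheaper firm serves the whole market.
        if np.isclose(p1, p2):
            return 0.5 * p1, 0.5 * p2
        return (p1, 0.0) if p1 < p2 else (0.0, p2)
    # Indifferent consumer z solves p1 + (z - x1)^2 = p2 + (z - x2)^2.
    z = (p2 - p1 + x2**2 - x1**2) / (2.0 * (x2 - x1))
    share1 = np.clip(z, 0.0, 1.0) if x1 < x2 else 1.0 - np.clip(z, 0.0, 1.0)
    return p1 * share1, p2 * (1.0 - share1)

def pure_nash(A, B):
    """First pure-strategy Nash cell of the bimatrix game (A, B); falls
    back to the jointly best cell if none exists (a sketch shortcut)."""
    br1 = A >= A.max(axis=0, keepdims=True)   # row player's best replies
    br2 = B >= B.max(axis=1, keepdims=True)   # column player's best replies
    cells = np.argwhere(br1 & br2)
    if len(cells):
        return tuple(cells[0])
    return np.unravel_index((A + B).argmax(), A.shape)

Q1 = np.zeros((N, N))   # firm 1's value of each joint action
Q2 = np.zeros((N, N))   # firm 2's value of each joint action

for _ in range(20000):
    ni, nj = pure_nash(Q1, Q2)                  # current stage-game Nash
    a1 = rng.integers(N) if rng.random() < EPS else ni   # epsilon-greedy
    a2 = rng.integers(N) if rng.random() < EPS else nj
    (x1, p1), (x2, p2) = ACTIONS[a1], ACTIONS[a2]
    r1, r2 = profits(x1, p1, x2, p2)
    # Nash Q-update: bootstrap with the Nash value of the continuation
    # game rather than each agent's own maximum, as in Nash Q-learning.
    Q1[a1, a2] += ALPHA * (r1 + GAMMA * Q1[ni, nj] - Q1[a1, a2])
    Q2[a1, a2] += ALPHA * (r2 + GAMMA * Q2[ni, nj] - Q2[a1, a2])

i, j = pure_nash(Q1, Q2)
print("learned play:", ACTIONS[i], ACTIONS[j])
```

With a high discount factor, runs of such a sketch can be inspected for whether the learned equilibrium cell pairs similar locations with low prices, the minimum-differentiation and price-war pattern the abstract reports; this toy version does not reproduce the paper's exact results.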
