Analyzing Decision-Making in Deep-Q Reinforcement Learning for Trading: A Case Study on Tesla Company and its Supply Chain
Authors: Karel Janda, M. Petit

Published in: IES Working Papers 40/2024

Keywords: Electric Vehicle Supply Chain, Algorithmic Trading, Machine Learning, Q-Reinforcement Learning, Interpretability

JEL codes: G17, Q42, C45, Q55

Suggested citation: Janda K., Petit M. (2024): "Analyzing Decision-Making in Deep-Q Reinforcement Learning for Trading: A Case Study on Tesla Company and its Supply Chain". IES Working Papers 40/2024. IES FSV. Charles University.

Abstract: This study examines the economic rationale behind algorithmic trading in the Electric Vehicle (EV) sector, with a focus on improving the interpretability of Q-learning agents. By integrating EV-specific data, such as Tesla's stock fundamentals and the stocks of key supply chain players including Albemarle and Panasonic Holdings Corporation, the paper uses a Q-Reinforcement Learning (Q-RL) framework to train a profitable trading agent. The agent's decisions are analyzed with a decision tree to reveal the influence of supply chain dynamics. Tested on a holdout period, the agent achieves monthly profitability above a 2% threshold. It shows sensitivity to supply chain instability and identifies potential disruptions affecting Tesla by treating supplier stock movements as proxies for broader economic and market conditions. This approach thereby improves understanding of, and trust in, Q-RL-based algorithmic trading within the EV market.