A multi-agent virtual market model for generalization in reinforcement learning based trading strategies

Fei Fan He, Chiao Ting Chen, Szu Hao Huang*

*Corresponding author of this work

Research output: Article, peer-reviewed

8 citations (Scopus)

Abstract

Many studies have successfully used reinforcement learning (RL) to train an intelligent agent that learns profitable trading strategies from financial market data. Most RL trading studies have simplified the effect of the trading agent's actions on the market state: the agent is trained to maximize long-term profit by optimizing over fixed historical data. However, such an approach frequently results in trading performance during out-of-sample validation that differs considerably from that during training. In this paper, we propose a multi-agent virtual market model (MVMM) composed of multiple generative adversarial networks (GANs) that cooperate with each other to reproduce market price changes. In addition, the action of the trading agent can be superimposed on the current state as the input of the MVMM to generate an action-dependent next state. In this research, real historical data were replaced with simulated market data generated by the MVMM. The experimental results indicated that the trading strategy of the trained RL agent achieved a 12% higher profit and exhibited a low risk of loss in the 2019 China Shanghai Shenzhen 300 stock index futures backtest.
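The core idea described in the abstract, feeding the agent's action together with the current state into a generative market model to obtain an action-dependent next state, can be illustrated with a minimal sketch. This is not the authors' implementation: the framework (PyTorch), the dimensions, the number of generators, and the names (MarketGenerator, simulated_step, STATE_DIM, ACTION_DIM, NOISE_DIM) are all illustrative assumptions, and the paper's actual architecture, training objectives, and cooperation scheme among the GANs are not reproduced here.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions for illustration only.
STATE_DIM = 32   # size of the encoded market state
ACTION_DIM = 3   # e.g. long / flat / short, one-hot encoded
NOISE_DIM = 16   # latent noise for the GAN generator


class MarketGenerator(nn.Module):
    """One generator of the virtual market: maps (state, action, noise)
    to a simulated next market state."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM + NOISE_DIM, 128),
            nn.ReLU(),
            nn.Linear(128, STATE_DIM),
        )

    def forward(self, state, action, noise):
        # Superimpose the agent's action on the current state by concatenation.
        x = torch.cat([state, action, noise], dim=-1)
        return self.net(x)


def simulated_step(generators, state, action):
    """Roll the virtual market one step forward by averaging the outputs of
    several generators (a stand-in for the cooperating multi-agent ensemble)."""
    noise = torch.randn(state.shape[0], NOISE_DIM)
    next_states = [g(state, action, noise) for g in generators]
    return torch.stack(next_states).mean(dim=0)


# Usage sketch: the RL agent is trained on simulated transitions like this
# instead of replaying fixed historical data.
generators = [MarketGenerator() for _ in range(4)]  # ensemble size is an assumption
state = torch.randn(8, STATE_DIM)                   # batch of current market states
action = torch.eye(ACTION_DIM)[torch.randint(0, ACTION_DIM, (8,))]
next_state = simulated_step(generators, state, action)
```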

Original language: English
Article number: 109985
Journal: Applied Soft Computing
Volume: 134
DOIs
Publication status: Published - Feb 2023
