Adversarial Attacks Against Reinforcement Learning-Based Portfolio Management Strategy

Yu-Ying Chen, Chiao-Ting Chen, Chuan-Yun Sang, Yao-Chun Yang, Szu-Hao Huang*

*Corresponding author of this work

    Research output: Article › peer-reviewed

    2 citations (Scopus)

    Abstract

    Many researchers have incorporated deep neural networks (DNNs) with reinforcement learning (RL) in automatic trading systems. However, such methods result in complicated algorithmic trading models with several defects, especially because a DNN model is vulnerable to malicious adversarial samples. Research has rarely focused on planning long-term attacks against RL-based trading systems. To mount such attacks, an adversary must generate imperceptible perturbations while simultaneously reducing the number of modified steps. In this research, an adversary is used to attack an RL-based trading agent. First, we propose an extension of the ensemble of identical independent evaluators (EIIE) method, called enhanced EIIE, in which information on the best bids and asks is incorporated. Enhanced EIIE was demonstrated to produce an authoritative trading agent that yields better portfolio performance than an EIIE agent. Enhanced EIIE was then applied to the adversarial agent so that the agent could learn when and how much to attack (in the form of introducing perturbations). In our experiments, our proposed adversarial attack mechanisms were more than 30% more effective at reducing accumulated portfolio value than the conventional attack mechanisms of the fast gradient sign method (FGSM) and iterative FGSM, which are the attack mechanisms most commonly studied and adapted for comparison and improvement.
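    The baseline attacks the abstract compares against follow the standard FGSM recipe: perturb the input by a small step in the sign direction of the loss gradient, and, for iterative FGSM, repeat that step several times. A minimal sketch of both, using a toy linear scorer with made-up weights (the paper's actual trading model and loss are not reproduced here):

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon=0.05):
    """One FGSM step: shift the input by epsilon in the sign of the loss gradient."""
    return x + epsilon * np.sign(grad)

def iterative_fgsm(x, grad_fn, epsilon=0.05, alpha=0.01, steps=5):
    """Iterative FGSM: repeat small FGSM steps, clipping to an epsilon-ball around x."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))
        x_adv = np.clip(x_adv, x - epsilon, x + epsilon)  # stay imperceptible
    return x_adv

# Hypothetical linear scorer: higher loss = worse decision for the trading agent.
w = np.array([0.5, -1.2, 0.8])          # toy weights, not from the paper
x = np.array([1.0, 0.3, -0.5])          # toy market-feature vector

def loss(x):
    return -float(w @ x)

def grad_fn(x):
    return -w                            # d(loss)/dx for this linear loss

x_fgsm = fgsm_perturb(x, grad_fn(x))
x_ifgsm = iterative_fgsm(x, grad_fn)
```

    Both attacks perturb every step by construction; the paper's contribution, by contrast, is an adversarial agent that also learns *when* to perturb, reducing the number of modified steps.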

    Original language: English
    Pages (from–to): 50667-50685
    Number of pages: 19
    Journal: IEEE Access
    Volume: 9
    DOIs
    Publication status: Published - March 2021
