Escaping from Zero Gradient: Revisiting Action-Constrained Reinforcement Learning via Frank-Wolfe Policy Optimization

Jyun Li Lin, Wei Hung, Shang Hsuan Yang, Ping-Chun Hsieh, Xi Liu

Research output: Paper, peer-reviewed

3 Citations (Scopus)

Abstract

Action-constrained reinforcement learning (RL) is a widely used approach in various real-world applications, such as scheduling in networked systems with resource constraints and control of a robot with kinematic constraints. While the existing projection-based approaches ensure zero constraint violation, they could suffer from the zero-gradient problem due to the tight coupling of the policy gradient and the projection, which results in sample-inefficient training and slow convergence. To tackle this issue, we propose a learning algorithm that decouples the action constraints from the policy parameter update by leveraging state-wise Frank-Wolfe and a regression-based policy update scheme. Moreover, we show that the proposed algorithm enjoys convergence and policy improvement properties in the tabular case and generalizes the popular DDPG algorithm for action-constrained RL in the general case. Through experiments, we demonstrate that the proposed algorithm significantly outperforms the benchmark methods on a variety of control tasks.
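To illustrate the decoupling idea described in the abstract, below is a minimal sketch (not the authors' code) of a state-wise Frank-Wolfe action improvement step followed by a regression-based actor update. Linear action constraints C a <= d, the Actor/Critic interfaces, the step count, and all hyperparameters are illustrative assumptions; the actual algorithm and its guarantees are given in the paper.

```python
# Hypothetical sketch: Frank-Wolfe over the action constraint set plus a
# regression-based policy update, assuming linear constraints C a <= d.
import numpy as np
import torch
from scipy.optimize import linprog


def frank_wolfe_action(critic, state, a_init, C, d, num_steps=10):
    """Improve an action within {a : C a <= d} by Frank-Wolfe on Q(s, .)."""
    a = np.array(a_init, dtype=np.float64)
    state_t = torch.tensor(state, dtype=torch.float32)
    for t in range(num_steps):
        a_t = torch.tensor(a, dtype=torch.float32, requires_grad=True)
        q = critic(state_t, a_t)                       # assumed scalar Q(s, a)
        grad = torch.autograd.grad(q, a_t)[0].detach().numpy()
        # Linear maximization oracle: argmax_{C v <= d} grad^T v
        res = linprog(c=-grad, A_ub=C, b_ub=d,
                      bounds=[(None, None)] * a.shape[0])
        v = res.x
        gamma = 2.0 / (t + 2.0)                        # standard FW step size
        a = (1.0 - gamma) * a + gamma * v              # convex combo stays feasible
    return a


def policy_regression_update(actor, actor_opt, states, target_actions):
    """Fit the actor to the Frank-Wolfe target actions (decoupled from constraints)."""
    pred = actor(states)
    loss = torch.mean((pred - target_actions) ** 2)
    actor_opt.zero_grad()
    loss.backward()
    actor_opt.step()
    return loss.item()
```

Because the actor is updated by regressing onto feasible target actions rather than by differentiating through a projection, the policy gradient no longer vanishes at the constraint boundary, which is the zero-gradient issue the abstract refers to.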

Original language: English
Pages: 397-407
Number of pages: 11
Publication status: Published - 27 Jul 2021
Event: 37th Conference on Uncertainty in Artificial Intelligence, UAI 2021 - Virtual, Online
Duration: 27 Jul 2021 → 30 Jul 2021

Conference

Conference: 37th Conference on Uncertainty in Artificial Intelligence, UAI 2021
City: Virtual, Online
Period: 27/07/21 → 30/07/21
