Hierarchical Reinforcement Learning with Guidance for Multi-Domain Dialogue Policy

Mahdin Rohmatillah, Jen-Tzung Chien*


Research output: Article › peer-reviewed

2 Citations (Scopus)


Achieving high performance in a multi-domain dialogue system with low computation is undoubtedly challenging. Previous works applying an end-to-end approach have been very successful. However, the computational cost remains a major issue, since a large language model such as GPT-2 is required. Meanwhile, the optimization of individual components in the dialogue system has not shown promising results, especially for the dialogue management component, due to the complexity of multi-domain state and action representation. To cope with these issues, this article presents an efficient guidance learning scheme in which imitation learning and hierarchical reinforcement learning (HRL) with a human in the loop are performed to achieve high performance with an inexpensive dialogue agent. Behavior cloning with auxiliary tasks is exploited to identify the important features in the latent representation. In particular, the proposed HRL is designed to treat each goal of a dialogue with a corresponding sub-policy, providing efficient dialogue policy learning by utilizing guidance from humans through action pruning and action evaluation, as well as the reward obtained from interaction with the simulated user in the environment. Experimental results on the ConvLab-2 framework show that the proposed method achieves state-of-the-art performance in dialogue policy optimization and outperforms GPT-2-based solutions in end-to-end system evaluation.
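The hierarchical design described above, in which a high-level controller dispatches each dialogue goal to its own sub-policy and human guidance prunes the candidate action set, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the domain names, the toy action-value table, and the `pruned_actions` field are all assumptions introduced for the example.

```python
class SubPolicy:
    """One sub-policy per dialogue goal/domain (e.g. 'hotel', 'train').
    Holds a toy action-value table standing in for a learned policy."""
    def __init__(self, domain, actions):
        self.domain = domain
        self.q = {a: 0.0 for a in actions}  # illustrative values, untrained

    def act(self, state, allowed):
        # Guidance as action pruning: pick only among human-allowed actions.
        candidates = [a for a in self.q if a in allowed]
        return max(candidates, key=lambda a: self.q[a])


class HierarchicalPolicy:
    """High-level controller: routes each turn to the active goal's sub-policy."""
    def __init__(self, sub_policies):
        self.sub_policies = sub_policies

    def act(self, state):
        domain = state["active_goal"]       # high-level decision (simplified)
        allowed = state["pruned_actions"]   # human guidance: pruned action set
        return self.sub_policies[domain].act(state, allowed)


policy = HierarchicalPolicy({
    "hotel": SubPolicy("hotel", ["request_price", "inform_name", "book"]),
    "train": SubPolicy("train", ["request_dest", "inform_time", "book"]),
})
state = {"active_goal": "hotel", "pruned_actions": {"inform_name", "book"}}
print(policy.act(state))  # -> inform_name
```

In the article, the sub-policies are trained with reinforcement learning against a simulated user and the pruning/evaluation signals come from a human in the loop; here both are reduced to static placeholders to expose only the dispatch structure.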

Pages (from-to): 748-761
Journal: IEEE/ACM Transactions on Audio, Speech, and Language Processing
Publication status: Published - 2023
