Deep reinforcement learning for mobile 5G and beyond: Fundamentals, applications, and challenges

Zehui Xiong, Yang Zhang*, Dusit Niyato, Ruilong Deng, Ping Wang, Li-Chun Wang

*Corresponding author of this work

Research output: Article (peer-reviewed)

57 Citations (Scopus)

Abstract

Future-generation wireless networks (5G and beyond) must accommodate surging growth in mobile data traffic and support an increasingly high density of mobile users involving a variety of services and applications. Meanwhile, the networks become increasingly dense, heterogeneous, decentralized, and ad hoc in nature, and they encompass numerous and diverse network entities. Consequently, different objectives, such as high throughput and low latency, need to be achieved in terms of service, and resource allocation must be designed and optimized accordingly. However, considering the dynamics and uncertainty that inherently exist in wireless network environments, conventional approaches for service and resource management that require complete and perfect knowledge of the systems are inefficient or even inapplicable. Inspired by the success of machine learning in solving complicated control and decision-making problems, in this article we focus on deep reinforcement-learning (DRL)-based approaches that allow network entities to learn and build knowledge about the networks and thus make optimal decisions locally and independently. We first overview fundamental concepts of DRL and then review related works that use DRL to address various issues in 5G networks. Finally, we present an application of DRL for 5G network slicing optimization. The numerical results demonstrate that the proposed approach achieves superior performance compared with baseline solutions.
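
To make the DRL idea concrete, below is a minimal, hypothetical sketch of a DQN-style agent allocating bandwidth among network slices. It is not the paper's implementation: the environment (`toy_env_step`), the number of slices, the discrete action set, and the served-traffic reward are all illustrative assumptions, and standard refinements such as a target network are omitted for brevity. The paper's actual slicing formulation, state/action/reward design, and results are given in the article itself.

```python
# Hypothetical sketch: a small DQN agent that splits bandwidth among slices.
# Environment, state/action definitions, and reward are illustrative only.
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim

N_SLICES = 3          # e.g., eMBB, URLLC, mMTC (assumed)
N_ACTIONS = 10        # discrete bandwidth-split choices (assumed)
STATE_DIM = N_SLICES  # per-slice traffic demand (assumed)


class QNetwork(nn.Module):
    """Maps an observed demand vector to Q-values over allocation actions."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, x):
        return self.net(x)


def toy_env_step(action):
    """Stand-in environment: random demands; reward is the traffic served."""
    demand = np.random.rand(N_SLICES).astype(np.float32)
    share = (action + 1) / N_ACTIONS                    # toy rule: fraction for slice 0
    alloc = np.array([share, (1 - share) / 2, (1 - share) / 2])
    reward = float(np.minimum(alloc, demand).sum())     # served traffic
    return demand, reward


q_net = QNetwork()
optimizer = optim.Adam(q_net.parameters(), lr=1e-3)
buffer = deque(maxlen=10_000)
gamma, epsilon = 0.99, 0.1

state, _ = toy_env_step(0)
for step in range(2000):
    # Epsilon-greedy action selection from the learned Q-values.
    if random.random() < epsilon:
        action = random.randrange(N_ACTIONS)
    else:
        with torch.no_grad():
            action = int(q_net(torch.tensor(state)).argmax())

    next_state, reward = toy_env_step(action)
    buffer.append((state, action, reward, next_state))
    state = next_state

    # One-step temporal-difference update on a sampled minibatch.
    if len(buffer) >= 64:
        batch = random.sample(buffer, 64)
        s, a, r, s2 = map(np.array, zip(*batch))
        s, s2 = torch.tensor(s), torch.tensor(s2)
        a = torch.tensor(a, dtype=torch.int64)
        r = torch.tensor(r, dtype=torch.float32)

        q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            target = r + gamma * q_net(s2).max(dim=1).values
        loss = nn.functional.mse_loss(q_sa, target)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```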

Original language: English
Article number: 8683970
Pages (from-to): 44-52
Number of pages: 9
Journal: IEEE Vehicular Technology Magazine
Volume: 14
Issue number: 2
DOIs
Publication status: Published - 1 June 2019
