TY - JOUR
T1 - Deep-Reinforcement-Learning-Based Drone Base Station Deployment for Wireless Communication Services
AU - Tarekegn, Getaneh Berie
AU - Juang, Rong-Terng
AU - Lin, Hsin-Piao
AU - Munaye, Yirga Yayeh
AU - Wang, Li-Chun
AU - Bitew, Mekuanint Agegnehu
N1 - Publisher Copyright:
© 2014 IEEE.
PY - 2022/11/1
Y1 - 2022/11/1
AB - Over the last few years, drone base station (DBS) technology has been recognized as a promising solution to the problem of network design for wireless communication systems, owing to its highly flexible deployment and dynamic mobility. This article focuses on 3-D mobility control of the DBS to boost transmission coverage and network connectivity. We propose a dynamic and scalable control strategy for drone mobility using deep reinforcement learning (DRL). The design goal is to maximize communication coverage and network connectivity for multiple real-time users over a time horizon. The proposed method operates on the received signals of mobile users, without knowledge of user locations, and is divided into two hierarchical stages. First, a time-series convolutional neural network (CNN)-based link quality estimation model determines the link quality at each timeslot. Second, a deep Q-learning algorithm controls the movement of the DBS in hotspot areas to meet user requirements. Simulation results show that, compared with the Q-learning algorithm, the proposed method achieves superior network performance in terms of both communication coverage and network throughput in a dynamic environment.
KW - Channel estimation
KW - communication coverage
KW - convolutional neural network (CNN)
KW - deep reinforcement learning (DRL)
KW - drone base station (DBS) mobility control
KW - network connectivity
UR - http://www.scopus.com/inward/record.url?scp=85132725161&partnerID=8YFLogxK
U2 - 10.1109/JIOT.2022.3182633
DO - 10.1109/JIOT.2022.3182633
M3 - Article
AN - SCOPUS:85132725161
SN - 2327-4662
VL - 9
SP - 21899
EP - 21915
JO - IEEE Internet of Things Journal
JF - IEEE Internet of Things Journal
IS - 21
ER -