TY - JOUR
T1 - Deep Reinforcement Learning-Based Drone Base Station Deployment for Wireless Communication Services
AU - Tarekegn, Getaneh Berie
AU - Juang, Rong Terng
AU - Lin, Hsin Piao
AU - Munaye, Yirga Yayeh
AU - Wang, Li Chun
AU - Bitew, Mekuanint Agegnehu
N1 - Publisher Copyright:
IEEE
PY - 2022
Y1 - 2022
N2 - Over the last few years, drone base station technology has been recognized as a promising solution to the problem of network design for wireless communication systems, owing to its highly flexible deployment and dynamic mobility. This paper focuses on 3D mobility control of a drone base station to boost transmission coverage and network connectivity. We propose a dynamic and scalable control strategy for drone mobility using deep reinforcement learning (DRL). The design goal is to maximize communication coverage and network connectivity for multiple real-time users over a time horizon. The proposed method operates on the received signals of mobile users, without knowledge of user locations, and is divided into two hierarchical stages. First, a time-series convolutional neural network (CNN)-based link quality estimation model determines the link quality at each timeslot. Second, a deep Q-learning algorithm controls the movement of the drone base station in hotspot areas to meet user requirements. Simulation results show that, compared with the Q-learning algorithm, the proposed method achieves significantly better network performance in terms of both communication coverage and network throughput in a dynamic environment.
AB - Over the last few years, drone base station technology has been recognized as a promising solution to the problem of network design for wireless communication systems, owing to its highly flexible deployment and dynamic mobility. This paper focuses on 3D mobility control of a drone base station to boost transmission coverage and network connectivity. We propose a dynamic and scalable control strategy for drone mobility using deep reinforcement learning (DRL). The design goal is to maximize communication coverage and network connectivity for multiple real-time users over a time horizon. The proposed method operates on the received signals of mobile users, without knowledge of user locations, and is divided into two hierarchical stages. First, a time-series convolutional neural network (CNN)-based link quality estimation model determines the link quality at each timeslot. Second, a deep Q-learning algorithm controls the movement of the drone base station in hotspot areas to meet user requirements. Simulation results show that, compared with the Q-learning algorithm, the proposed method achieves significantly better network performance in terms of both communication coverage and network throughput in a dynamic environment.
KW - Atmospheric modeling
KW - Base stations
KW - Channel estimation
KW - communication coverage
KW - convolutional neural network
KW - DBS mobility control
KW - deep reinforcement learning
KW - Drones
KW - Estimation
KW - network connectivity
KW - Satellite broadcasting
KW - Three-dimensional displays
KW - Wireless communication
UR - http://www.scopus.com/inward/record.url?scp=85132725161&partnerID=8YFLogxK
U2 - 10.1109/JIOT.2022.3182633
DO - 10.1109/JIOT.2022.3182633
M3 - Article
AN - SCOPUS:85132725161
SN - 2327-4662
SP - 1
JO - IEEE Internet of Things Journal
JF - IEEE Internet of Things Journal
ER -