TY - JOUR
T1 - User Access Control in Open Radio Access Networks
T2 - A Federated Deep Reinforcement Learning Approach
AU - Cao, Yang
AU - Lien, Shao-Yu
AU - Liang, Ying-Chang
AU - Chen, Kwang-Cheng
AU - Shen, Xuemin
N1 - Publisher Copyright:
© 2002-2012 IEEE.
PY - 2022/6/1
Y1 - 2022/6/1
N2 - Targeting the implementation of next-generation radio access networks (RANs) with virtualized network components, the open RAN (O-RAN) has been regarded as a novel paradigm towards fully open, virtualized, and interoperable RANs. By introducing RAN intelligent controllers (RICs), machine learning (ML) can be deployed to an unprecedented extent, adapting to various vertical applications and deployment environments without sophisticated planning efforts. However, the O-RAN also suffers from two critical challenges, load balancing and frequent handovers, under massive base station (BS) deployments. In this paper, an intelligent user access control scheme based on deep reinforcement learning (DRL) is proposed. To optimize the performance of the distributed deep Q-networks (DQNs) trained by user equipments (UEs), a federated DRL-based scheme is proposed in which a global model server installed in the RIC updates the DQN parameters. To further train the global DQN predictively with acceptable signaling overhead, an upper confidence bound (UCB) algorithm is developed to select the optimal set of UEs, together with a dueling structure to decompose the DQN parameters. With the proposed scheme, each UE effectively maximizes its long-term throughput while avoiding frequent handovers. Simulation results justify the superior performance of the proposed scheme over the state of the art and may serve as a reference for O-RAN standardization.
AB - Targeting the implementation of next-generation radio access networks (RANs) with virtualized network components, the open RAN (O-RAN) has been regarded as a novel paradigm towards fully open, virtualized, and interoperable RANs. By introducing RAN intelligent controllers (RICs), machine learning (ML) can be deployed to an unprecedented extent, adapting to various vertical applications and deployment environments without sophisticated planning efforts. However, the O-RAN also suffers from two critical challenges, load balancing and frequent handovers, under massive base station (BS) deployments. In this paper, an intelligent user access control scheme based on deep reinforcement learning (DRL) is proposed. To optimize the performance of the distributed deep Q-networks (DQNs) trained by user equipments (UEs), a federated DRL-based scheme is proposed in which a global model server installed in the RIC updates the DQN parameters. To further train the global DQN predictively with acceptable signaling overhead, an upper confidence bound (UCB) algorithm is developed to select the optimal set of UEs, together with a dueling structure to decompose the DQN parameters. With the proposed scheme, each UE effectively maximizes its long-term throughput while avoiding frequent handovers. Simulation results justify the superior performance of the proposed scheme over the state of the art and may serve as a reference for O-RAN standardization.
KW - deep Q-networks (DQNs)
KW - deep reinforcement learning (DRL)
KW - federated learning (FL)
KW - Open radio access networks (O-RANs)
KW - RAN intelligent controller (RIC)
KW - user access control
UR - http://www.scopus.com/inward/record.url?scp=85118653811&partnerID=8YFLogxK
U2 - 10.1109/TWC.2021.3123500
DO - 10.1109/TWC.2021.3123500
M3 - Article
AN - SCOPUS:85118653811
SN - 1536-1276
VL - 21
SP - 3721
EP - 3736
JO - IEEE Transactions on Wireless Communications
JF - IEEE Transactions on Wireless Communications
IS - 6
ER -
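
Editor's note: the abstract in this record describes the scheme only at a high level: UE-side dueling DQNs, a global model server in the RIC that aggregates their parameters, and a UCB rule for choosing which UEs participate in each round. The minimal Python sketch below is an editorial illustration of those three ingredients under simple assumptions (FedAvg-style parameter averaging, a PyTorch dueling Q-network, and a textbook UCB index); all class and function names are hypothetical and are not taken from the paper or from O-RAN specifications.

# Illustrative sketch only; not the paper's implementation. Assumes PyTorch is available.
import math
import torch
import torch.nn as nn

class DuelingDQN(nn.Module):
    """Dueling Q-network: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""
    def __init__(self, state_dim: int, num_actions: int, hidden: int = 64):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)                # state-value stream V(s)
        self.advantage = nn.Linear(hidden, num_actions)  # advantage stream A(s, a)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        h = self.feature(state)
        v = self.value(h)
        a = self.advantage(h)
        return v + a - a.mean(dim=-1, keepdim=True)

def federated_average(global_dqn: nn.Module, local_dqns: list) -> None:
    """Hypothetical RIC-side global model server step: FedAvg-style averaging of
    the DQN parameters uploaded by the selected UEs into the global model."""
    global_state = global_dqn.state_dict()
    for key in global_state:
        global_state[key] = torch.stack(
            [dqn.state_dict()[key].float() for dqn in local_dqns], dim=0
        ).mean(dim=0)
    global_dqn.load_state_dict(global_state)

def ucb_score(avg_reward: float, times_selected: int, total_rounds: int, c: float = 2.0) -> float:
    """Textbook UCB index used here as a stand-in for the paper's UE-selection rule:
    favor UEs that reported high rewards, but keep exploring rarely selected ones."""
    if times_selected == 0:
        return float("inf")  # always try a UE that has never been selected
    return avg_reward + c * math.sqrt(math.log(total_rounds) / times_selected)

# Toy usage with assumed dimensions: three UE-side DQNs aggregated at the RIC.
ue_dqns = [DuelingDQN(state_dim=8, num_actions=4) for _ in range(3)]
global_dqn = DuelingDQN(state_dim=8, num_actions=4)
federated_average(global_dqn, ue_dqns)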