RL-Routing: An SDN Routing Algorithm Based on Deep Reinforcement Learning

Yi Ren Chen, Amir Rezapour*, Wen Guey Tzeng, Shi-Chun Tsai

*Corresponding author for this work

Research output: Article › peer-review

1 Citation (Scopus)

Abstract

Communication networks are difficult to model and predict because they have become very sophisticated and dynamic. We develop a reinforcement learning routing algorithm (RL-Routing) to solve a traffic engineering (TE) problem of SDN in terms of throughput and delay. RL-Routing solves the TE problem via experience, instead of building an accurate mathematical model. We consider comprehensive network information for state representation and use one-to-many network configuration for routing choices. Our reward function, which uses network throughput and delay, is adjustable for optimizing either upward or downward network throughput. After appropriate training, the agent learns a policy that predicts future behavior of the underlying network and suggests better routing paths between switches. The simulation results show that RL-Routing obtains higher rewards and enables a host to transfer a large file faster than the Open Shortest Path First (OSPF) and Least Loaded (LL) routing algorithms on various network topologies. For example, on the NSFNet topology, the sum of rewards obtained by RL-Routing is 119.30, whereas those of OSPF and LL are 106.59 and 74.76, respectively. The average transmission time for a 40 GB file using RL-Routing is 25.2 s, whereas those of OSPF and LL are 63 s and 53.4 s, respectively.
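The abstract states that the reward combines network throughput and delay and can be tuned toward upward or downward throughput. As a rough illustration only, and not the paper's actual formula, a weighted reward of this kind might look like the sketch below; `alpha`, `beta`, and the unit choices are hypothetical.

```python
# Illustrative sketch of a throughput/delay reward (assumed form, not from the paper).
# alpha trades off upward vs. downward throughput; beta penalizes path delay.
def reward(up_throughput_mbps: float,
           down_throughput_mbps: float,
           delay_ms: float,
           alpha: float = 0.5,
           beta: float = 0.1) -> float:
    """Return a scalar reward that grows with throughput and shrinks with delay."""
    throughput_term = alpha * up_throughput_mbps + (1.0 - alpha) * down_throughput_mbps
    return throughput_term - beta * delay_ms

# Example: a candidate path measured at 800/600 Mbps with 20 ms delay.
print(reward(800.0, 600.0, 20.0))  # 698.0 with the default weights
```

Setting `alpha` closer to 1.0 would bias the agent toward upward throughput, while a larger `beta` would favor low-delay paths.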

Original language: English
Article number: 9171590
Pages (from-to): 3185-3199
Number of pages: 15
Journal: IEEE Transactions on Network Science and Engineering
Volume: 7
Issue number: 4
DOIs
Publication status: Published - 1 October 2020
