TY - GEN
T1 - Computation Offloading Algorithm Based on Deep Reinforcement Learning and Multi-Task Dependency for Edge Computing
AU - Lin, Tengxiang
AU - Lin, Cheng Kuan
AU - Chen, Zhen
AU - Cheng, Hongju
N1 - Publisher Copyright:
© 2022, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
PY - 2022
Y1 - 2022
N2 - Edge computing is a promising emerging computing paradigm that brings computation and storage resources to the network edge, significantly reducing service latency. In this paper, we aim to divide a task into several sub-tasks according to its inherent dependencies, guided by the goal of high concurrency, and then offload the sub-tasks to other edge servers so that they can be processed at minimum cost. Furthermore, we propose a DRL-based Multi-Task Dependency Offloading Algorithm (MTDOA) to address the challenges posed by dependencies between sub-tasks and dynamic operating environments. First, we model the task offloading decision as a Markov decision process. Then, we use a graph attention network to extract the dependency information of different tasks and combine Long Short-Term Memory (LSTM) with a Deep Q Network (DQN) to handle temporal dependencies. Finally, simulation experiments demonstrate that the proposed algorithm converges well and outperforms several baseline algorithms, confirming its effectiveness and reliability.
AB - Edge computing is a promising emerging computing paradigm that brings computation and storage resources to the network edge, significantly reducing service latency. In this paper, we aim to divide a task into several sub-tasks according to its inherent dependencies, guided by the goal of high concurrency, and then offload the sub-tasks to other edge servers so that they can be processed at minimum cost. Furthermore, we propose a DRL-based Multi-Task Dependency Offloading Algorithm (MTDOA) to address the challenges posed by dependencies between sub-tasks and dynamic operating environments. First, we model the task offloading decision as a Markov decision process. Then, we use a graph attention network to extract the dependency information of different tasks and combine Long Short-Term Memory (LSTM) with a Deep Q Network (DQN) to handle temporal dependencies. Finally, simulation experiments demonstrate that the proposed algorithm converges well and outperforms several baseline algorithms, confirming its effectiveness and reliability.
KW - Computation offloading
KW - Deep reinforcement learning
KW - Dependency
KW - Edge computing
KW - Multiple tasks
UR - http://www.scopus.com/inward/record.url?scp=85150950057&partnerID=8YFLogxK
U2 - 10.1007/978-981-19-9582-8_10
DO - 10.1007/978-981-19-9582-8_10
M3 - Conference contribution
AN - SCOPUS:85150950057
SN - 9789811995811
T3 - Communications in Computer and Information Science
SP - 111
EP - 122
BT - New Trends in Computer Technologies and Applications - 25th International Computer Symposium, ICS 2022, Proceedings
A2 - Hsieh, Sun-Yuan
A2 - Hung, Ling-Ju
A2 - Peng, Sheng-Lung
A2 - Klasing, Ralf
A2 - Lee, Chia-Wei
PB - Springer Science and Business Media Deutschland GmbH
T2 - 25th International Computer Symposium on New Trends in Computer Technologies and Applications, ICS 2022
Y2 - 15 December 2022 through 17 December 2022
ER -