TY - JOUR
T1 - Learning-Based Multitier Split Computing for Efficient Convergence of Communication and Computation
AU - Cao, Yang
AU - Lien, Shao-Yu
AU - Yeh, Cheng-Hao
AU - Deng, Der-Jiunn
AU - Liang, Ying-Chang
AU - Niyato, Dusit
N1 - Publisher Copyright:
© 2024 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.
PY - 2024
Y1 - 2024
N2 - By splitting deep neural network (DNN) computation loads between user equipments (UEs) and an edge server, split computing has emerged as a promising paradigm for delivering high-quality artificial intelligence (AI) services to energy-constrained UEs. To satisfy the service demands of a large number of UEs, traditional edge-UE split computing is evolving toward multitier split computing involving edge and cloud servers with different capabilities, which leads to a complex joint optimization of communication and computation. To tackle this challenge, this article proposes a multitier deep reinforcement learning (DRL) decision-making scheme for distributed splitting-point selection and computing resource allocation in three-tier UE-edge-cloud split computing systems. With the proposed scheme, the high-dimensional optimization is decomposed into local decision-making tasks that the UEs and the edge server perform sequentially with different control cycles. As the UEs and the edge server update their policies in successive stages, the overall performance of split computing improves continuously, which is justified through a theoretical convergence analysis. Comprehensive simulation studies show that the proposed multitier DRL decision-making scheme outperforms conventional split computing schemes in terms of overall latency, inference accuracy, and energy efficiency, making multitier split computing practical.
AB - By splitting deep neural network (DNN) computation loads between user equipments (UEs) and an edge server, split computing has emerged as a promising paradigm for delivering high-quality artificial intelligence (AI) services to energy-constrained UEs. To satisfy the service demands of a large number of UEs, traditional edge-UE split computing is evolving toward multitier split computing involving edge and cloud servers with different capabilities, which leads to a complex joint optimization of communication and computation. To tackle this challenge, this article proposes a multitier deep reinforcement learning (DRL) decision-making scheme for distributed splitting-point selection and computing resource allocation in three-tier UE-edge-cloud split computing systems. With the proposed scheme, the high-dimensional optimization is decomposed into local decision-making tasks that the UEs and the edge server perform sequentially with different control cycles. As the UEs and the edge server update their policies in successive stages, the overall performance of split computing improves continuously, which is justified through a theoretical convergence analysis. Comprehensive simulation studies show that the proposed multitier DRL decision-making scheme outperforms conventional split computing schemes in terms of overall latency, inference accuracy, and energy efficiency, making multitier split computing practical.
KW - Computing resource allocation
KW - deep reinforcement learning (DRL)
KW - multitier decision-making
KW - split computing
KW - splitting-point selection
UR - http://www.scopus.com/inward/record.url?scp=85199570294&partnerID=8YFLogxK
U2 - 10.1109/JIOT.2024.3426531
DO - 10.1109/JIOT.2024.3426531
M3 - Article
AN - SCOPUS:85199570294
SN - 2327-4662
VL - 11
SP - 33077
EP - 33096
JO - IEEE Internet of Things Journal
JF - IEEE Internet of Things Journal
IS - 20
ER -