Optimum splitting computing for DNN training through next generation smart networks: a multi-tier deep reinforcement learning approach

Shao Yu Lien, Cheng Hao Yeh, Der Jiunn Deng*

*Corresponding author for this work

Research output: Article, peer-reviewed

1 citation (Scopus)

Abstract

Deep neural networks (DNNs), which group massive numbers of neural nodes into layers, have been a promising innovation for function approximation and inference, and have been widely applied to vertical applications such as image recognition. However, the computing burden of training a DNN model within a limited latency may not be affordable for the user equipment (UE), which motivates the concept of splitting the computation of DNN layers across not only the edge server but also the cloud platform. Despite the availability of more computing resources, such split computing also suffers from packet transmission unreliability, latency, and significant energy consumption. A practical scheme to optimally split the computation of DNN layers among the UE, the edge, and the cloud is thus urgently needed. To solve this optimization, we propose a multi-tier deep reinforcement learning (DRL) scheme with which the UE and the edge distributively determine the splitting points that minimize the overall training latency while meeting constraints on overall energy consumption and image recognition accuracy. Performance evaluation results show the outstanding performance of the proposed design compared with state-of-the-art schemes, fully justifying its practicality in next-generation smart networks.
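The splitting problem the abstract describes can be illustrated with a toy exhaustive search over the two cut points (UE→edge and edge→cloud). This is only a minimal sketch of the latency objective: the paper's actual method is a multi-tier DRL scheme, and every function name, parameter value, and cost model below is an illustrative assumption, not taken from the paper.

```python
def total_latency(s1, s2, flops, speed, act_bits, rate):
    """Latency of running layers [0, s1) on the UE, [s1, s2) on the edge,
    and [s2, L) on the cloud, plus link transfer times at each cut.
    All cost models here are illustrative assumptions."""
    L = len(flops)
    # per-tier compute time: workload (FLOPs) divided by tier speed (FLOPs/s)
    t = (sum(flops[:s1]) / speed['ue']
         + sum(flops[s1:s2]) / speed['edge']
         + sum(flops[s2:]) / speed['cloud'])
    if s1 < L:  # activations (or input) forwarded over the UE-edge link
        t += act_bits[s1] / rate['ue_edge']
    if s2 < L:  # forwarded again over the edge-cloud link
        t += act_bits[s2] / rate['edge_cloud']
    return t

def best_split(flops, speed, act_bits, rate):
    """Brute-force search over all valid split-point pairs 0 <= s1 <= s2 <= L."""
    L = len(flops)
    return min(((s1, s2) for s1 in range(L + 1) for s2 in range(s1, L + 1)),
               key=lambda sp: total_latency(*sp, flops, speed, act_bits, rate))
```

With 4 layers there are only 15 candidate pairs, but the search space grows quadratically in depth, and the paper additionally imposes energy and accuracy constraints under time-varying channels, which is what motivates learning the splitting policy with DRL instead of enumerating it.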

Original language: English
Pages (from–to): 1737-1751
Number of pages: 15
Journal: Wireless Networks
Volume: 30
Issue number: 3
DOIs
Publication status: Published - April 2024
