Model Pruning for Wireless Federated Learning with Heterogeneous Channels and Devices

Da Wei Wang*, Chi Kai Hsieh, Kun Lin Chan, Feng Tsun Chien

*Corresponding author for this work

Research output: Conference contribution, peer-reviewed

Abstract

Federated learning (FL) enables distributed model training while preserving user privacy and reducing communication overhead. Model pruning further improves learning efficiency by removing weight connections in neural networks, increasing inference speed and reducing model storage size. While a larger pruning ratio shortens the latency of each communication round, more communication rounds are then needed for convergence. In this work, a training-based pruning ratio decision policy is proposed for wireless federated learning. By jointly minimizing the average gradients and the training latency under a given time budget, we optimize the pruning ratio for each device and the total number of training rounds. Numerical results demonstrate that the proposed algorithm achieves a faster convergence rate and lower latency compared with an existing approach.
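To make the pruning-ratio/latency trade-off described in the abstract concrete, below is a minimal illustrative sketch, not the authors' algorithm: it applies unstructured magnitude pruning with a per-device pruning ratio and evaluates a toy per-round latency model. All names and constants (prune_weights, round_latency, the link rate and compute time) are hypothetical assumptions for illustration only.

```python
# Minimal sketch: per-device magnitude pruning and a toy latency model
# illustrating that a larger pruning ratio shortens each round but may
# require more rounds for convergence. Constants are assumed, not from
# the paper.
import numpy as np


def prune_weights(weights: np.ndarray, pruning_ratio: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction `pruning_ratio` of weights."""
    flat = np.abs(weights).ravel()
    k = int(pruning_ratio * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask


def round_latency(pruning_ratio: float,
                  model_size_bits: float = 1e6,
                  uplink_rate_bps: float = 5e5,
                  compute_time_s: float = 0.4) -> float:
    """Toy model: pruning shrinks both the uplink payload and compute time."""
    kept = 1.0 - pruning_ratio
    return kept * model_size_bits / uplink_rate_bps + kept * compute_time_s


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(64, 32))
    for rho in (0.2, 0.5, 0.8):
        pruned = prune_weights(w, rho)
        sparsity = 1.0 - np.count_nonzero(pruned) / pruned.size
        print(f"rho={rho:.1f}  sparsity={sparsity:.2f}  "
              f"latency/round={round_latency(rho):.2f}s")
```

In the paper's setting, the pruning ratio per device and the total number of rounds would instead be chosen jointly under a time budget; the sketch only shows the per-round effect of the pruning ratio.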

Original language: English
Title of host publication: Proceedings - 2023 VTS Asia Pacific Wireless Communications Symposium, APWCS 2023
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (electronic): 9798350316803
DOIs
Publication status: Published - 2023
Event: 2023 VTS Asia Pacific Wireless Communications Symposium, APWCS 2023 - Tainan City, Taiwan
Duration: 23 Aug 2023 - 25 Aug 2023

Publication series

Name: Proceedings - 2023 VTS Asia Pacific Wireless Communications Symposium, APWCS 2023

Conference

Conference: 2023 VTS Asia Pacific Wireless Communications Symposium, APWCS 2023
Country/Territory: Taiwan
City: Tainan City
Period: 23/08/23 - 25/08/23
