Model Pruning for Wireless Federated Learning with Heterogeneous Channels and Devices

Da Wei Wang*, Chi Kai Hsieh, Kun Lin Chan, Feng Tsun Chien

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Federated learning (FL) enables distributed model training while preserving user privacy and reducing communication overhead. Model pruning further improves learning efficiency by removing weight connections in neural networks, increasing inference speed and reducing model storage size. While a larger pruning ratio shortens the latency of each communication round, more communication rounds are needed for convergence. In this work, a training-based pruning-ratio decision policy is proposed for wireless federated learning. By jointly minimizing the average gradients and the training latency under a given time budget, we optimize the pruning ratio for each device and the total number of training rounds. Numerical results demonstrate that the proposed algorithm achieves a faster convergence rate and lower latency than existing approaches.
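The abstract does not specify the pruning criterion, so as a minimal illustration of the kind of per-device pruning it describes, here is a sketch of generic magnitude-based weight pruning, where a fraction `ratio` of the smallest-magnitude weights is zeroed. The function name and the use of NumPy are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def prune_weights(weights, ratio):
    """Zero out the smallest-magnitude fraction `ratio` of entries.

    A generic magnitude-pruning sketch (assumed criterion); the
    paper's actual pruning rule is not given in the abstract.
    """
    if ratio <= 0:
        return weights.copy()
    flat = np.abs(weights).ravel()
    k = int(ratio * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude serves as the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

# Example: prune 50% of a 4x4 weight matrix
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
pruned = prune_weights(w, 0.5)
print(np.count_nonzero(pruned))
```

In the setting of the paper, each device would receive its own `ratio` from the proposed policy: devices with weaker channels or slower hardware get larger ratios to shorten per-round latency, at the cost of needing more rounds to converge.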

Original language: English
Title of host publication: Proceedings - 2023 VTS Asia Pacific Wireless Communications Symposium, APWCS 2023
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9798350316803
State: Published - 2023
Event: 2023 VTS Asia Pacific Wireless Communications Symposium, APWCS 2023 - Tainan City, Taiwan
Duration: 23 Aug 2023 - 25 Aug 2023

Publication series

Name: Proceedings - 2023 VTS Asia Pacific Wireless Communications Symposium, APWCS 2023

Conference

Conference: 2023 VTS Asia Pacific Wireless Communications Symposium, APWCS 2023
Country/Territory: Taiwan
City: Tainan City
Period: 23/08/23 - 25/08/23
