Low-Rate Universal Vector Quantization for Federated Learning

Guan Huei Lyu*, Bagus Aris Saputra, Stefano Rini, Chung Hsuan Sun*, Shih Chun Lin*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Federated learning (FL) is a distributed training paradigm in which the training of a machine learning model is coordinated by a central parameter server (PS) while the data remains distributed across multiple edge devices. FL has therefore received considerable attention, as it allows models to be trained while providing security and privacy. In practice, the performance bottleneck is the link capacity from each edge device to the PS. To satisfy stringent link capacity constraints, model updates need to be compressed rather aggressively at the edge devices. In this paper, we propose a low-rate universal vector quantizer that can attain low or even fractional-rate compression. Our scheme consists of two steps: (i) model update pre-processing and (ii) vector quantization using a universal trellis coded quantizer (TCQ). In the pre-processing step, model updates are sparsified and scaled so as to match the TCQ design. The quantization step then uses TCQ, which allows for a fractional compression rate and has a flexible input size, so that it can be adapted to the different neural network layers. Simulations show that our vector quantization can save 75% of the link capacity while attaining accuracy competitive with the other compressors proposed in the literature.
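The two steps described in the abstract can be illustrated with a minimal sketch: top-k sparsification plus scaling as the pre-processing, followed by a rate-1 trellis coded quantizer with a 4-state trellis and a set-partitioned four-level codebook. All function names, the trellis labeling, and the scaling rule below are illustrative assumptions, not the authors' exact design.

```python
import numpy as np

def preprocess(update, k):
    """Sparsify a model update to its k largest-magnitude entries and
    normalize by a per-update scale (illustrative choice: mean magnitude
    of the survivors). Returns scaled values, mask, and scale."""
    flat = update.ravel().copy()
    idx = np.argsort(np.abs(flat))[-k:]            # indices of the top-k entries
    mask = np.zeros_like(flat, dtype=bool)
    mask[idx] = True
    flat[~mask] = 0.0
    scale = np.abs(flat[mask]).mean() if k else 1.0
    return flat / max(scale, 1e-12), mask, scale

# 4-state trellis: next state = ((s << 1) | b) & 3. Each branch (s, b)
# emits one level from a set-partitioned union codebook (assumed labeling).
LEVELS = np.array([-1.5, -0.5, 0.5, 1.5])
BRANCH_LEVEL = {(s, b): LEVELS[2 * b + (s & 1)] for s in range(4) for b in (0, 1)}

def tcq_encode(x):
    """Viterbi search for the 1 bit/sample path minimizing squared error."""
    n = len(x)
    cost = np.full(4, np.inf)
    cost[0] = 0.0                                  # encoding starts in state 0
    back = np.zeros((n, 4, 2), dtype=int)          # (previous state, bit)
    for t in range(n):
        new = np.full(4, np.inf)
        for s in range(4):
            if not np.isfinite(cost[s]):
                continue
            for b in (0, 1):
                ns = ((s << 1) | b) & 3
                c = cost[s] + (x[t] - BRANCH_LEVEL[(s, b)]) ** 2
                if c < new[ns]:
                    new[ns] = c
                    back[t, ns] = (s, b)
        cost = new
    s = int(np.argmin(cost))                       # trace back the best path
    bits = []
    for t in range(n - 1, -1, -1):
        ps, b = back[t, s]
        bits.append(int(b))
        s = int(ps)
    return bits[::-1]

def tcq_decode(bits):
    """Replay the bit path through the trellis to recover reconstructions."""
    s, out = 0, []
    for b in bits:
        out.append(BRANCH_LEVEL[(s, b)])
        s = ((s << 1) | b) & 3
    return np.array(out)
```

The decoder needs only the bit path (1 bit per retained sample) and the scale, which is how the scheme reaches such low per-parameter rates; a fractional rate would follow from coding the sparsity mask and sharing path bits across entries, which this sketch omits.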

Original language: English
Title of host publication: 2024 IEEE International Conference on Communications Workshops, ICC Workshops 2024
Editors: Matthew Valenti, David Reed, Melissa Torres
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1407-1412
Number of pages: 6
ISBN (Electronic): 9798350304053
DOIs
State: Published - 2024
Event: 2024 Annual IEEE International Conference on Communications Workshops, ICC Workshops 2024 - Denver, United States
Duration: 9 Jun 2024 - 13 Jun 2024

Publication series

Name: 2024 IEEE International Conference on Communications Workshops, ICC Workshops 2024

Conference

Conference: 2024 Annual IEEE International Conference on Communications Workshops, ICC Workshops 2024
Country/Territory: United States
City: Denver
Period: 9/06/24 - 13/06/24

Keywords

  • Federated learning
  • deep learning
  • low-rate quantization
  • sparsification
  • universal quantization
  • vector quantization
