Autoencoder-Enhanced Federated Learning with Reduced Overhead and Lower Latency

Chi Kai Hsieh*, Feng Tsun Chien*, Min Kuan Chang

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

This paper investigates the application of autoencoders (AEs) in supporting the training process of federated learning (FL) by reducing communication overhead and latency. We propose a scheduling algorithm that determines when and how to use the autoencoder during training. Our simulations show that federated learning with an autoencoder significantly reduces communication overhead without compromising testing accuracy. Moreover, the testing-accuracy curve increases more consistently over training rounds with an autoencoder than without one. Additionally, the latency of federated learning with an autoencoder is lower than that of federated learning without one.
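To make the idea concrete, the following is a minimal sketch (not the authors' implementation) of how an autoencoder can shrink the payload a client uploads in a federated round, with a per-round flag standing in for the paper's scheduling decision. All names (UpdateAE, client_upload, UPDATE_DIM, LATENT_DIM) and dimensions are illustrative assumptions.

```python
# Hedged sketch: AE-compressed client uploads in federated learning.
# Assumptions: updates are flattened to a fixed-size vector; the AE is
# already trained and shared by client and server; the scheduler's
# decision is reduced to a boolean per round.
import torch
import torch.nn as nn

UPDATE_DIM = 1024   # flattened size of a client's model update (assumption)
LATENT_DIM = 64     # size of the compressed code actually transmitted (assumption)

class UpdateAE(nn.Module):
    """Small autoencoder over flattened model updates."""
    def __init__(self, dim=UPDATE_DIM, latent=LATENT_DIM):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(),
                                     nn.Linear(256, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 256), nn.ReLU(),
                                     nn.Linear(256, dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def client_upload(update, ae, use_ae):
    """If the scheduler enables the AE this round, upload the latent code."""
    if use_ae:
        with torch.no_grad():
            return ae.encoder(update)   # fewer values to transmit
    return update                        # otherwise send the raw update

def server_receive(payload, ae, was_compressed):
    """Server reconstructs the full-size update when it arrives compressed."""
    if was_compressed:
        with torch.no_grad():
            return ae.decoder(payload)
    return payload

# Toy round with one client and a random update vector.
ae = UpdateAE()
update = torch.randn(UPDATE_DIM)
use_ae = True   # a real scheduler would decide this per round and per client
payload = client_upload(update, ae, use_ae)
recovered = server_receive(payload, ae, use_ae)
print(payload.numel(), "values sent instead of", update.numel())
```

In this toy setup the uplink carries 64 values instead of 1024; the reconstruction error introduced by the decoder is the price paid for that saving, which is why a scheduling rule for when to compress matters.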

Original language: English
Title of host publication: 2023 Asia Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA ASC 2023
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 2118-2123
Number of pages: 6
ISBN (Electronic): 9798350300673
DOIs
State: Published - 2023
Event: 2023 Asia Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA ASC 2023 - Taipei, Taiwan
Duration: 31 Oct 2023 – 3 Nov 2023

Publication series

Name: 2023 Asia Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA ASC 2023

Conference

Conference: 2023 Asia Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA ASC 2023
Country/Territory: Taiwan
City: Taipei
Period: 31/10/23 – 3/11/23

Keywords

  • Federated learning
  • autoencoder
  • scheduling
