Knowledge Caching for Federated Learning

Xin Ying Zheng, Ming Chun Lee, Y. W. Peter Hong

Research output: Conference article › peer-review

2 Citations (Scopus)

Abstract

This work examines a novel wireless content distribution problem where machine learning models (e.g., deep neural networks) are cached at local small cell base-stations to facilitate access by users within their coverage. The models are trained by federated learning procedures, which allow local users to collaboratively train the models using their locally stored data. Upon the completion of training, the model can also be accessed by all other users depending on their application demand. Different from conventional wireless caching problems, the placement of machine learning models should depend not only on the users' preferences but also on the data available at the users and their channel conditions. In this work, we propose to jointly optimize the caching decision, user selection, and wireless resource allocation, including the transmit powers and bandwidth of the selected users, to minimize a training error bound. The problem is reduced to minimizing a weighted sum of local dataset sizes subject to constraints on the cache storage capacity, the communication and computation latency, and the total energy consumption. We first derive the minimum loss achievable for each cached model, and then determine the optimal models to cache by solving an equivalent 0-1 Knapsack problem that minimizes the total average loss. Simulations show that the proposed scheme can achieve lower error bounds compared to preference-only and random caching policies.
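As the abstract notes, once the minimum achievable loss per cached model is known, the caching decision reduces to a 0-1 Knapsack problem under the storage capacity constraint. A minimal sketch of that selection step follows; the model sizes, per-model loss reductions, and capacity are illustrative assumptions, not values from the paper, and minimizing total average loss is cast here as maximizing the total loss reduction of the cached models.

```python
# Hypothetical sketch: choose which models to cache so that the total
# average loss is minimized, subject to a cache storage capacity.
# Equivalent 0-1 knapsack: maximize the total loss reduction
# (loss if a model is not cached minus its minimum achievable loss).

def select_models_to_cache(sizes, loss_reductions, capacity):
    """0-1 knapsack DP; sizes and capacity are integer storage units."""
    n = len(sizes)
    # dp[c] = best total loss reduction achievable with capacity c
    dp = [0.0] * (capacity + 1)
    choice = [[False] * (capacity + 1) for _ in range(n)]
    for i in range(n):
        # Iterate capacity downward so each model is cached at most once
        for c in range(capacity, sizes[i] - 1, -1):
            cand = dp[c - sizes[i]] + loss_reductions[i]
            if cand > dp[c]:
                dp[c] = cand
                choice[i][c] = True
    # Backtrack to recover the set of cached models
    cached, c = [], capacity
    for i in range(n - 1, -1, -1):
        if choice[i][c]:
            cached.append(i)
            c -= sizes[i]
    return sorted(cached), dp[capacity]

# Example with made-up numbers
sizes = [3, 4, 2, 5]            # storage units per model
gains = [1.2, 2.0, 0.8, 2.5]    # loss reduction if model i is cached
cached, total_gain = select_models_to_cache(sizes, gains, capacity=7)
# Caches models 2 and 3 (sizes 2 + 5 = 7, total gain 3.3)
```

In the paper's formulation, the per-model values would come from the derived error bound after optimizing user selection and resource allocation for each model; the knapsack step itself is standard dynamic programming.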

Original language: English
Journal: Proceedings - IEEE Global Communications Conference, GLOBECOM
Publication status: Published - 2021
Event: 2021 IEEE Global Communications Conference, GLOBECOM 2021 - Madrid, Spain
Duration: 7 Dec 2021 - 11 Dec 2021
