TY - CONF
T1 - Knowledge Caching for Federated Learning
AU - Zheng, Xin Ying
AU - Lee, Ming-Chun
AU - Hong, Y.-W. Peter
PY - 2021/12
Y1 - 2021/12
AB - This work examines a novel wireless content distribution problem where machine learning models (e.g., deep neural networks) are cached at local small-cell base stations to facilitate access by users within their coverage. The models are trained by federated learning procedures that allow local users to collaboratively train the models using their locally stored data. Upon completion of training, a model can also be accessed by all other users, depending on their application demands. Unlike conventional wireless caching problems, the placement of machine learning models should depend not only on the users' preferences but also on the data available at the users and their channel conditions. In this work, we propose to jointly optimize the caching decision, user selection, and wireless resource allocation, including the transmit powers and bandwidths of the selected users, to minimize a training error bound. The problem reduces to minimizing a weighted sum of local dataset sizes subject to constraints on the cache storage capacity, the communication and computation latency, and the total energy consumption. We first derive the minimum achievable loss for each cached model, and then determine the optimal set of models to cache by solving an equivalent 0–1 knapsack problem that minimizes the total average loss. Simulations show that the proposed scheme achieves lower error bounds than preference-only and random caching policies.
U2 - 10.1109/GLOBECOM46510.2021.9685861
DO - 10.1109/GLOBECOM46510.2021.9685861
M3 - Paper
SP - 1
EP - 6
ER -