TY - JOUR
T1 - Communication-Efficient Federated DNN Training
T2 - Convert, Compress, Correct
AU - Chen, Zhong Jing
AU - Hernandez, Eduin E.
AU - Huang, Yu Chih
AU - Rini, Stefano
N1 - Publisher Copyright:
© 2014 IEEE.
PY - 2024
Y1 - 2024
N2 - In the federated training of a deep neural network (DNN), model updates are transmitted from the remote users to the parameter server (PS). In many scenarios of practical relevance, one is interested in reducing the communication overhead to enhance training efficiency. To address this challenge, we introduce CO3. CO3 takes its name from the three processing steps applied to reduce the communication load when transmitting the local DNN gradients from the remote users to the PS, namely: 1) gradient quantization through floating-point conversion; 2) lossless compression of the quantized gradient; and 3) correction of the quantization error. We carefully design each of the steps above to ensure good training performance under a constraint on the communication rate. In particular, in steps 1) and 2), we adopt the assumption that DNN gradients are distributed according to a generalized normal distribution, which is validated numerically in this article. For step 3), we utilize error feedback with a memory decay mechanism to correct the quantization error introduced in step 1). We argue that the memory decay coefficient, similar to the learning rate, can be optimally tuned to improve convergence. A rigorous convergence analysis of the proposed CO3 with stochastic gradient descent (SGD) is provided. Moreover, with extensive simulations, we show that CO3 offers improved performance as compared with existing gradient compression schemes proposed in the literature that employ sketching and nonuniform quantization of the local gradients.
AB - In the federated training of a deep neural network (DNN), model updates are transmitted from the remote users to the parameter server (PS). In many scenarios of practical relevance, one is interested in reducing the communication overhead to enhance training efficiency. To address this challenge, we introduce CO3. CO3 takes its name from the three processing steps applied to reduce the communication load when transmitting the local DNN gradients from the remote users to the PS, namely: 1) gradient quantization through floating-point conversion; 2) lossless compression of the quantized gradient; and 3) correction of the quantization error. We carefully design each of the steps above to ensure good training performance under a constraint on the communication rate. In particular, in steps 1) and 2), we adopt the assumption that DNN gradients are distributed according to a generalized normal distribution, which is validated numerically in this article. For step 3), we utilize error feedback with a memory decay mechanism to correct the quantization error introduced in step 1). We argue that the memory decay coefficient, similar to the learning rate, can be optimally tuned to improve convergence. A rigorous convergence analysis of the proposed CO3 with stochastic gradient descent (SGD) is provided. Moreover, with extensive simulations, we show that CO3 offers improved performance as compared with existing gradient compression schemes proposed in the literature that employ sketching and nonuniform quantization of the local gradients.
KW - Deep neural network (DNN) training
KW - error feedback (EF)
KW - federated learning (FL)
KW - gradient compression
KW - gradient modeling
UR - http://www.scopus.com/inward/record.url?scp=85204129656&partnerID=8YFLogxK
U2 - 10.1109/JIOT.2024.3456857
DO - 10.1109/JIOT.2024.3456857
M3 - Article
AN - SCOPUS:85204129656
SN - 2327-4662
VL - 11
SP - 40431
EP - 40447
JO - IEEE Internet of Things Journal
JF - IEEE Internet of Things Journal
IS - 24
ER -