TY - JOUR
T1 - Probabilistic Byzantine Attack on Federated Learning
AU - Wang, Tsung Hsuan
AU - Chen, Po Ning
AU - Huang, Yu Chih
N1 - Publisher Copyright:
© 1991-2012 IEEE.
PY - 2025
Y1 - 2025
N2 - In this paper, motivated by the severe effects of black-box evasion attacks on machine learning, we investigate the vulnerability of federated learning (FL) systems to Byzantine attacks. Existing studies predominantly evaluate their defense strategies using monotonous Byzantine attacks in the training stage, which fail to consider the public dataset’s characteristics. This oversight may undermine confidence in Byzantine defense strategies. In this work, we investigate the issue from the perspective of a Byzantine attacker rather than focusing on mitigating Byzantine attacks as a system designer. Adopting a specific learning task as an example, we examine it using an optimal probabilistic Byzantine attack policy, which extends the research scope introduced in [12]. Specifically, we determine the minimum Byzantine effort required to manipulate the sample distribution in the testing stage into given Byzantine sample distributions. We then derive the optimal and near-optimal Byzantine sample distributions subject to a fixed compromising effort. Additionally, a closed-form expression for the optimal weights in FL is obtained, through which a connection between these optimal weights and those obtained from FL training can be established. Through numerical experiments, we confirm the effectiveness of the proposed probabilistic Byzantine attack, which can serve as a good test for anti-attack defense strategies.
AB - In this paper, motivated by the severe effects of black-box evasion attacks on machine learning, we investigate the vulnerability of federated learning (FL) systems to Byzantine attacks. Existing studies predominantly evaluate their defense strategies using monotonous Byzantine attacks in the training stage, which fail to consider the public dataset’s characteristics. This oversight may undermine confidence in Byzantine defense strategies. In this work, we investigate the issue from the perspective of a Byzantine attacker rather than focusing on mitigating Byzantine attacks as a system designer. Adopting a specific learning task as an example, we examine it using an optimal probabilistic Byzantine attack policy, which extends the research scope introduced in [12]. Specifically, we determine the minimum Byzantine effort required to manipulate the sample distribution in the testing stage into given Byzantine sample distributions. We then derive the optimal and near-optimal Byzantine sample distributions subject to a fixed compromising effort. Additionally, a closed-form expression for the optimal weights in FL is obtained, through which a connection between these optimal weights and those obtained from FL training can be established. Through numerical experiments, we confirm the effectiveness of the proposed probabilistic Byzantine attack, which can serve as a good test for anti-attack defense strategies.
KW - Byzantine attack
KW - deep neural networks
KW - distributed learning
KW - federated learning
UR - https://www.scopus.com/pages/publications/105004072287
U2 - 10.1109/TSP.2025.3564842
DO - 10.1109/TSP.2025.3564842
M3 - Article
AN - SCOPUS:105004072287
SN - 1053-587X
VL - 73
SP - 1823
EP - 1838
JO - IEEE Transactions on Signal Processing
JF - IEEE Transactions on Signal Processing
ER -