Poisoning attacks on face authentication systems by using the generative deformation model

Chak Tong Chan, Szu Hao Huang*, Patrick Puiyui Choy


Research output: Article, peer-reviewed

2 Citations (Scopus)


Various studies have revealed the vulnerabilities of machine learning algorithms. For example, a hacker can poison a deep-learning facial recognition system in order to impersonate an administrator and obtain confidential information. According to these studies, poisoning attacks are typically implemented by exploiting the optimization conditions of the machine learning algorithm. However, neural networks, because of their complexity, are typically unsuited to these attacks. Although several poisoning strategies have been developed against deep facial recognition systems, poor image quality and unrealistic assumptions remain their drawbacks. Therefore, we propose a black-box poisoning attack strategy against facial recognition systems, which works by injecting abnormal data generated by applying elastic transformation to deform the facial components. We demonstrated the performance of the proposed strategy on the VGGFace2 data set by attacking various facial feature extractors, and it outperformed its counterparts in the literature. The contributions of this study lie in 1) providing a novel method of attack against a nonoverfitting facial recognition system with fewer injections, 2) applying a new image transformation technique to compose malicious samples, and 3) formulating a method that leaves no trace of modification visible to the human eye.
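The abstract's key mechanism is elastic transformation: a smoothed random displacement field warps the image so that facial components are subtly deformed. The paper's exact generative deformation model is not specified here, so the following is only a minimal sketch of a standard elastic transform (in the style of Simard et al.); the parameters `alpha` and `sigma` and the function name are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def elastic_transform(image, alpha=34.0, sigma=4.0, seed=0):
    """Warp a 2-D image with a smoothed random displacement field.

    alpha scales the displacement magnitude; sigma controls the
    Gaussian smoothing of the field (larger sigma = smoother warp).
    These defaults are illustrative, not the paper's settings.
    """
    rng = np.random.default_rng(seed)
    shape = image.shape
    # Random per-pixel displacements, smoothed so the warp is coherent.
    dx = gaussian_filter(rng.uniform(-1, 1, shape), sigma) * alpha
    dy = gaussian_filter(rng.uniform(-1, 1, shape), sigma) * alpha
    y, x = np.meshgrid(np.arange(shape[0]), np.arange(shape[1]),
                       indexing="ij")
    coords = np.vstack([(y + dy).ravel(), (x + dx).ravel()])
    # Bilinear resampling at the displaced coordinates.
    return map_coordinates(image, coords, order=1,
                           mode="reflect").reshape(shape)

# Toy example: deform a small synthetic "face" patch.
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0
warped = elastic_transform(img)
```

Because the displacement field is Gaussian-smoothed, the deformation stays locally consistent, which is what keeps such modifications hard to notice by eye while still shifting the sample in feature space.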

Journal: Multimedia Tools and Applications
Publication status: Accepted/In press - 2023

