Poisoning attacks on face authentication systems by using the generative deformation model

Chak Tong Chan, Szu Hao Huang*, Patrick Puiyui Choy

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

2 Scopus citations


Various studies have revealed the vulnerabilities of machine learning algorithms. For example, a hacker can poison a deep learning facial recognition system to impersonate an administrator and obtain confidential information. According to the literature, poisoning attacks are typically implemented by exploiting the optimization conditions of the machine learning algorithm. However, neural networks, because of their complexity, are typically unsuited to these attacks. Although several poisoning strategies have been developed against deep facial recognition systems, poor image quality and unrealistic assumptions remain drawbacks of these strategies. Therefore, we proposed a black-box poisoning attack strategy against facial recognition systems, which works by injecting abnormal data generated by applying elastic transformation to deform the facial components. We demonstrated the performance of the proposed strategy on the VGGFace2 data set by attacking various facial feature extractors, and it outperformed its counterparts in the literature. The contributions of the study lie in 1) providing a novel method of attack against a nonoverfitting facial recognition system with fewer injections, 2) applying a new image transformation technique to compose malicious samples, and 3) formulating a method that leaves no trace of modification visible to the human eye.
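The abstract does not disclose the implementation details of the paper's generative deformation model. As a rough illustration only, a classic elastic distortion (in the spirit of standard elastic-transformation data augmentation, not the authors' exact method) can be sketched as below; the names `box_blur`, `elastic_deform`, and the parameters `alpha` and `k` are hypothetical choices for this sketch.

```python
import numpy as np

def box_blur(field, k):
    """Mean filter with an odd kernel size k (a crude smoothing step,
    standing in for the Gaussian filter often used in elastic distortion)."""
    p = k // 2
    padded = np.pad(field, p, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(padded, (k, k))
    return windows.mean(axis=(2, 3))

def elastic_deform(image, alpha=15.0, k=5, seed=0):
    """Elastically deform a 2-D grayscale image: draw random per-pixel
    displacements, smooth them so neighboring pixels move coherently,
    scale them by alpha, and resample with nearest-neighbor lookup."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    dx = box_blur(rng.uniform(-1, 1, (h, w)), k) * alpha
    dy = box_blur(rng.uniform(-1, 1, (h, w)), k) * alpha
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    map_y = np.clip(np.round(ys + dy), 0, h - 1).astype(int)
    map_x = np.clip(np.round(xs + dx), 0, w - 1).astype(int)
    return image[map_y, map_x]

# Toy stand-in for a face crop; a real attack would deform facial components.
face = np.arange(64 * 64, dtype=float).reshape(64, 64)
poisoned = elastic_deform(face)
```

Because the smoothed displacement field is small and spatially coherent, the warped image keeps its overall appearance to a human observer while shifting local structure, which is the property the attack relies on.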

Original language: English
Journal: Multimedia Tools and Applications
State: Accepted/In press - 2023


Keywords:
  • Adversarial attack
  • Computer vision
  • Facial recognition
  • Image deformation
  • Information security
  • Poisoning attack


