Various studies have revealed vulnerabilities in machine learning algorithms. For example, by poisoning a deep learning facial recognition system, an attacker can impersonate an administrator and obtain confidential information. According to these studies, poisoning attacks are typically constructed from the optimality conditions of the machine learning algorithm; however, because of their complexity, neural networks are generally not amenable to such attacks. Although several poisoning strategies have been developed against deep facial recognition systems, poor image quality and unrealistic assumptions remain their main drawbacks. We therefore propose a black-box poisoning attack strategy against facial recognition systems that injects abnormal data generated by applying elastic transformation to deform facial components. We demonstrated the performance of the proposed strategy on the VGGFace2 data set by attacking various facial feature extractors, and it outperformed its counterparts in the literature. The contributions of this study are threefold: 1) a novel attack against a nonoverfitting facial recognition system that requires fewer injections, 2) a new image transformation technique for composing malicious samples, and 3) a method that leaves no trace of modification perceptible to the human eye.
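To give a concrete sense of the elastic transformation used to generate the abnormal samples, the sketch below implements the classic random-displacement-field formulation in pure NumPy: a random field is smoothed with a Gaussian kernel to obtain a locally coherent warp, and the image is resampled bilinearly at the displaced coordinates. This is a generic illustration, not the paper's exact implementation; the function name and the `alpha`/`sigma` parameter values are assumptions.

```python
import numpy as np

def elastic_transform(image, alpha=8.0, sigma=6.0, seed=0):
    """Apply a random elastic deformation to a 2-D grayscale image.

    alpha scales the displacement magnitude; sigma controls the
    smoothness of the displacement field. Illustrative defaults only.
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape

    # 1-D Gaussian kernel used to smooth the random displacement fields.
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-(x ** 2) / (2 * sigma ** 2))
    kernel /= kernel.sum()

    def smooth(field):
        # Separable Gaussian convolution: rows first, then columns.
        field = np.apply_along_axis(
            lambda r: np.convolve(r, kernel, mode="same"), 1, field)
        field = np.apply_along_axis(
            lambda c: np.convolve(c, kernel, mode="same"), 0, field)
        return field

    # Smoothed random displacements in x and y, scaled by alpha.
    dx = smooth(rng.uniform(-1, 1, (h, w))) * alpha
    dy = smooth(rng.uniform(-1, 1, (h, w))) * alpha

    # Displaced sampling coordinates, clipped to the image border.
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    sy = np.clip(ys + dy, 0, h - 1)
    sx = np.clip(xs + dx, 0, w - 1)

    # Bilinear interpolation at the displaced coordinates.
    y0, x0 = np.floor(sy).astype(int), np.floor(sx).astype(int)
    y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
    wy, wx = sy - y0, sx - x0
    top = image[y0, x0] * (1 - wx) + image[y0, x1] * wx
    bot = image[y1, x0] * (1 - wx) + image[y1, x1] * wx
    return top * (1 - wy) + bot * wy
```

Because the displacement field is smooth, facial components are bent rather than scrambled, which is why such warps can remain hard to detect by eye at small `alpha`.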