SegNet: a network for detecting deepfake facial videos

Chia-Mu Yu*, Kang Cheng Chen, Ching Tang Chang, Yen Wu Ti

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

6 Scopus citations


Recent advancements in artificial intelligence have made the forgery of digital images and videos easy. Deepfake technology uses a deep learning approach to identify and replace faces in images or videos. It can make people distrust digital content, thereby significantly affecting political and social stability. When the training and test data come from different sources, existing solutions for identifying forged images achieve considerably low accuracy; in many cases, the detection accuracy falls significantly below 50%. In this study, we propose SegNet, a face-forgery-detection method, to determine whether images or videos have been processed using deepfake technology. By focusing on the changes in various regions of an image and ignoring the characteristics of different forgery techniques, SegNet solves the problem of low detection accuracy. SegNet achieves satisfactory detection accuracy using the recently proposed separable convolutional neural networks, ensemble models, and image segmentation. Moreover, we examine the effects of different image-segmentation methods on the detection results. A comprehensive comparison between SegNet and the existing solutions shows the superior detection capability of SegNet.
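The abstract's core idea, splitting a face image into regions, scoring each region independently, and combining the scores as an ensemble, can be illustrated with a minimal sketch. This is not the paper's actual architecture: the grid tiling, the placeholder `region_score` (a simple high-frequency-energy proxy standing in for a trained separable-CNN classifier), and the plain averaging are all assumptions made for illustration.

```python
import numpy as np

def split_into_regions(image, grid=(2, 2)):
    """Tile an H x W image into a grid of equal, non-overlapping regions."""
    h, w = image.shape[:2]
    gh, gw = grid
    rh, rw = h // gh, w // gw
    return [image[i * rh:(i + 1) * rh, j * rw:(j + 1) * rw]
            for i in range(gh) for j in range(gw)]

def region_score(region):
    """Placeholder per-region forgery score: mean vertical-gradient
    magnitude as a crude stand-in for a trained per-region classifier."""
    return float(np.abs(np.diff(region, axis=0)).mean())

def ensemble_score(image, grid=(2, 2)):
    """Average the per-region scores, mimicking an ensemble of
    region-level detectors voting on one image."""
    scores = [region_score(r) for r in split_into_regions(image, grid)]
    return sum(scores) / len(scores)
```

In the paper's actual pipeline each region would be classified by a separable CNN, and the ensemble would combine those learned outputs rather than a hand-crafted statistic.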

Original language: English
Pages (from-to): 793–814
Number of pages: 22
Journal: Multimedia Systems
Issue number: 3
State: Published - Jun 2022


  • Deepfake
  • Video manipulation

