Fake face detection based on a multi discriminator deep CNN architecture (MDD-CNN)


  • Chemesse Ennehar Bencheriet 8 Mai 1945-Guelma University, Computer Science Department/LAIG Laboratory, 24000 Guelma, Algeria
  • Hiba Abdelmoumène 8 Mai 1945-Guelma University, Computer Science Department/LabGED Laboratory, 24000 Guelma, Algeria
  • Abdennour Sebbagh 8 Mai 1945-Guelma University, Department of Electrical and Automatic Engineering/LAIG Laboratory, 24000 Guelma, Algeria
  • Abdennour Yahiyaoui 8 Mai 1945-Guelma University, Computer Science Department, 24000 Guelma, Algeria
  • Zahra Taba 8 Mai 1945-Guelma University, Computer Science Department, 24000 Guelma, Algeria




fake face, real face, discriminator, MDD-CNN architecture, adversarial training, transfer learning, deep learning


Owing to the power of the deep learning tools used to build image-generation applications, fakes are becoming increasingly common as these applications grow more widely available and accessible to the general public. These fakes are typically fake faces, or even entirely fake people, that are difficult to distinguish from real individuals; we therefore need more efficient applications for fraud detection. In this work, we propose a new multi-discriminator architecture to distinguish fake faces from real ones. The architecture consists of three deep networks (discriminators) competing with each other, each trained differently, and the final decision is made by a vote over the decisions of the three discriminators. The core element of our architecture is the proposed new adversarial deep network discriminator (NDGAN), which is trained in three different ways, yielding three distinct discriminators: discriminator 1 undergoes adversarial training, discriminator 2 is trained by transfer learning, and discriminator 3 undergoes supervised training with a standard CNN using examples and counterexamples. Training and testing were performed on 70 000 real faces from the Flickr-Faces-HQ (FFHQ) dataset and 70 000 fake faces generated with Nvidia’s StyleGAN. Tests of the three networks individually produced significant results, with accuracies ranging from 79 % to 98 % on fake faces and from 80 % to 98 % on real faces. The reliability of the discriminators contributes significantly to the overall performance of the multi-discriminator system, which achieves an accuracy of 96 % on fake faces and 98 % on real faces.




C. Bregler, M. Covell, M. Slaney. Video rewrite: Driving visual speech with audio. In Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques, pp. 353–360. 1997. https://doi.org/10.1145/258734.258880

T. F. Cootes, G. J. Edwards, C. J. Taylor. Active appearance models. IEEE Transactions on Pattern Analysis and Machine Intelligence 23(6):681–685, 2001. https://doi.org/10.1109/34.927467

A. Santha. Deepfakes generation using LSTM based generative adversarial networks. Master Thesis, Rochester Institute of Technology, 2020.

J. Thies, M. Zollhofer, M. Stamminger, et al. Face2Face: Real-time face capture and reenactment of RGB videos. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2387–2395. 2016. https://doi.org/10.1109/CVPR.2016.262

S. Suwajanakorn, S. M. Seitz, I. Kemelmacher-Shlizerman. Synthesizing Obama: Learning lip sync from audio. ACM Transactions on Graphics 36(4):1–13, 2017. https://doi.org/10.1145/3072959.3073640

M. Westerlund. The emergence of deepfake technology: A review. Technology Innovation Management Review 9(11):39–52, 2019. https://doi.org/10.22215/timreview/1282

F. Marra, D. Gragnaniello, L. Verdoliva, G. Poggi. Do GANs leave artificial fingerprints? In IEEE Conference on Multimedia Information Processing and Retrieval (MIPR), pp. 506–511. 2019. https://doi.org/10.1109/MIPR.2019.00103

A. Rössler, D. Cozzolino, L. Verdoliva, et al. FaceForensics++: Learning to detect manipulated facial images. [2023-05-31]. arXiv:1901.08971

Y. Li, X. Yang, P. Sun, et al. Celeb-DF: A large-scale challenging dataset for deepfake forensics. [2023-05-31]. arXiv:1909.12962v4

G. Mahfoudi, B. Tajini, F. Retraint, et al. DEFACTO: Image and face manipulation dataset. In 27th European Signal Processing Conference (EUSIPCO), pp. 1–5. 2019. https://doi.org/10.23919/EUSIPCO.2019.8903181

P. Zhou, X. Han, V. I. Morariu, L. S. Davis. Two-stream neural networks for tampered face detection. In IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 1831–1839. 2017. https://doi.org/10.1109/CVPRW.2017.229

Z. Guo, G. Yang, J. Chen, X. Sun. Fake face detection via adaptive manipulation traces extraction network. Computer Vision and Image Understanding 204:103170, 2021. https://doi.org/10.1016/j.cviu.2021.103170

Z. Alom, T. M. Taha, C. Yakopcic, et al. A state-of-the-art survey on deep learning theory and architectures. Electronics 8(3):292, 2019. https://doi.org/10.3390/electronics8030292

K. G. Kim. Book review: Deep learning. Healthcare Informatics Research 22(4):351–354, 2016. https://doi.org/10.4258/hir.2016.22.4.351

D. Güera, E. J. Delp. Deepfake video detection using recurrent neural networks. In 15th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), pp. 1–6. 2018. https://doi.org/10.1109/AVSS.2018.8639163

I. Laptev, M. Marszalek, C. Schmid, B. Rozenfeld. Learning realistic human actions from movies. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–8. 2008. https://doi.org/10.1109/CVPR.2008.4587756

Z. Liu, X. Qi, P. H. S. Torr. Global texture enhancement for fake face detection in the wild. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8057–8066. 2020. https://doi.org/10.1109/CVPR42600.2020.00808

T. Karras, S. Laine, T. Aila. A style-based generator architecture for generative adversarial networks. [2023-05-31]. arXiv:1812.04948

T. Karras, T. Aila, S. Laine, J. Lehtinen. Progressive growing of GANs for improved quality, stability, and variation. [2023-05-31]. arXiv:1710.10196

A. Deshmukh, S. B. Wankhade. Deepfake detection approaches using deep learning: A systematic review. In Intelligent Computing and Networking. Lecture Notes in Networks and Systems, vol. 146. 2021. https://doi.org/10.1007/978-981-15-7421-4_27

S. Albawi, T. A. Mohammed, S. Al-Zawi. Understanding of a convolutional neural network. In International Conference on Engineering and Technology (ICET), pp. 1–6. 2017. https://doi.org/10.1109/ICEngTechnol.2017.8308186

T. Jung, S. Kim, K. Kim. DeepVision: Deepfakes detection using human eye blinking pattern. IEEE Access 8:83144–83154, 2020. https://doi.org/10.1109/ACCESS.2020.2988660

P. Korshunov, S. Marcel. Vulnerability assessment and detection of deepfake videos. In International Conference on Biometrics (ICB), pp. 1–6. 2019. https://doi.org/10.1109/ICB45273.2019.8987375

O. Parkhi, A. Vedaldi, A. Zisserman. Deep face recognition. In British Machine Vision Conference, pp. 1–12. 2015. https://doi.org/10.5244/C.29.41

F. Schroff, D. Kalenichenko, J. Philbin. FaceNet: A unified embedding for face recognition and clustering. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 815–823. 2015. https://doi.org/10.1109/CVPR.2015.7298682

Y. Li, S. Lyu. Exposing DeepFake videos by detecting face warping artifacts. [2023-05-31]. arXiv:1811.00656

X. Yang, Y. Li, S. Lyu. Exposing deep fakes using inconsistent head poses. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 8261–8265. 2019. https://doi.org/10.1109/ICASSP.2019.8683164

A. Chintha, B. Thai, S. J. Sohrawardi, et al. Recurrent convolutional structures for audio spoof and video deepfake detection. IEEE Journal of Selected Topics in Signal Processing 14(5):1024–1037, 2020. https://doi.org/10.1109/JSTSP.2020.2999185

F. Chollet. Xception: Deep learning with depthwise separable convolutions. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1800–1807. 2017. https://doi.org/10.1109/CVPR.2017.195

S. Lee, S. Tariq, Y. Shin, S. S. Woo. Detecting handcrafted facial image manipulations and GAN-generated facial images using Shallow-FakeFaceNet. Applied Soft Computing 105:107256, 2021. https://doi.org/10.1016/j.asoc.2021.107256

I. J. Goodfellow, J. Shlens, C. Szegedy. Explaining and harnessing adversarial examples. [2023-05-31]. arXiv:1412.6572

A. Radford, L. Metz, S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. [2023-05-31]. arXiv:1511.06434

A. Mangal, N. Kumar. Using big data to enhance the bosch production line performance: A Kaggle challenge. In IEEE International Conference on Big Data (Big Data), pp. 2029–2035. 2016. https://doi.org/10.1109/BigData.2016.7840826

T. Carneiro, R. V. Medeiros Da NóBrega, T. Nepomuceno, et al. Performance analysis of Google colaboratory as a tool for accelerating deep learning applications. IEEE Access 6:61677–61685, 2018. https://doi.org/10.1109/ACCESS.2018.2874767




How to Cite

Bencheriet, C. E., Abdelmoumène, H., Sebbagh, A., Yahiyaoui, A., & Taba, Z. (2023). Fake face detection based on a multi discriminator deep CNN architecture (MDD-CNN). Acta Polytechnica, 63(5), 305–319. https://doi.org/10.14311/AP.2023.63.0305