Eff-YNet: A Dual Task Network for DeepFake Detection and Segmentation
Publication Date
1-4-2021
Document Type
Conference Proceeding
Publication Title
Proceedings of the 2021 15th International Conference on Ubiquitous Information Management and Communication, IMCOM 2021
DOI
10.1109/IMCOM51814.2021.9377373
Abstract
Advances in generative models and manipulation techniques have given rise to digitally altered videos known as deepfakes. These videos are difficult for both humans and machines to identify. Modern detection methods exploit various weaknesses in deepfake videos, such as visual artifacts and inconsistent posing. In this paper, we describe a novel architecture called Eff-YNet designed to detect visual differences between altered and unaltered areas. The architecture combines an EfficientNet encoder and a U-Net with a classification branch into a model capable of both classifying and segmenting deepfake videos. The segmentation task helps train the classifier and also produces useful segmentation masks. We also implement ResNet 3D to detect spatiotemporal inconsistencies. To test these models, we run experiments on the Deepfake Detection Challenge dataset and show improvements over baseline classification models. Furthermore, we find that an ensemble of these two approaches improves performance over either approach alone.
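The following is a minimal PyTorch sketch of the kind of dual-task architecture the abstract describes: an EfficientNet encoder shared by a classification branch and a U-Net-style segmentation decoder, so that the segmentation loss can act as an auxiliary training signal for the classifier. It is an illustration only, not the authors' implementation; the choice of torchvision's EfficientNet-B0 backbone, the skip-connection indices, the decoder channel widths, and the joint loss mentioned in the closing comment are all assumptions.

# Minimal sketch of a dual-task "EfficientNet encoder + U-Net decoder +
# classification branch" model, as described in the abstract. All layer
# configurations below are illustrative assumptions, not paper details.
# Requires torch and torchvision >= 0.13 (for the weights= argument).
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import efficientnet_b0


class DecoderBlock(nn.Module):
    """Upsample, concatenate a skip connection, then refine with a conv."""

    def __init__(self, in_ch, skip_ch, out_ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch + skip_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x, skip):
        x = F.interpolate(x, size=skip.shape[-2:], mode="nearest")
        x = torch.cat([x, skip], dim=1)
        return self.conv(x)


class EffYNetSketch(nn.Module):
    """Dual-task model: a binary real/fake logit plus a manipulation mask."""

    # EfficientNet-B0 stages whose outputs serve as U-Net skip connections
    # (indices into efficientnet_b0().features): 16, 24, 40, 112 channels.
    SKIP_IDX = (1, 2, 3, 5)
    BOTTLENECK_CH = 1280  # output of B0's final 1x1 conv stage

    def __init__(self):
        super().__init__()
        self.encoder = efficientnet_b0(weights=None).features
        # Classification branch on globally pooled bottleneck features.
        self.classifier = nn.Linear(self.BOTTLENECK_CH, 1)
        # U-Net-style decoder for the segmentation branch.
        self.decoders = nn.ModuleList([
            DecoderBlock(self.BOTTLENECK_CH, 112, 128),
            DecoderBlock(128, 40, 64),
            DecoderBlock(64, 24, 32),
            DecoderBlock(32, 16, 16),
        ])
        self.mask_head = nn.Conv2d(16, 1, kernel_size=1)

    def forward(self, x):
        size = x.shape[-2:]
        skips = []
        for i, stage in enumerate(self.encoder):
            x = stage(x)
            if i in self.SKIP_IDX:
                skips.append(x)
        # Classification logit from the shared encoder's bottleneck.
        cls_logit = self.classifier(x.mean(dim=(2, 3)))
        # Segmentation mask from the decoder path (skips used deepest-first).
        for dec, skip in zip(self.decoders, reversed(skips)):
            x = dec(x, skip)
        mask_logits = F.interpolate(self.mask_head(x), size=size,
                                    mode="bilinear", align_corners=False)
        return cls_logit, mask_logits


if __name__ == "__main__":
    model = EffYNetSketch()
    frames = torch.randn(2, 3, 224, 224)       # a batch of face crops
    cls_logit, mask_logits = model(frames)
    print(cls_logit.shape, mask_logits.shape)  # (2, 1) and (2, 1, 224, 224)
    # Joint training would combine both objectives, e.g.
    # F.binary_cross_entropy_with_logits on the class logit and on the mask.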
Keywords
computer vision, deep learning, deepfake detection, image classification, image segmentation, U-Net
Department
Computer Science
Recommended Citation
Eric Tjon, Melody Moh, and Teng Sheng Moh. "Eff-YNet: A Dual Task Network for DeepFake Detection and Segmentation." Proceedings of the 2021 15th International Conference on Ubiquitous Information Management and Communication, IMCOM 2021 (2021). https://doi.org/10.1109/IMCOM51814.2021.9377373