PUDD: Towards Robust Multi-modal Prototype-based Deepfake Detection


Figure: Prototype learning-based image classification with original images.

Alvaro Lopez Pellicer, Yi Li and Plamen Angelov have had a paper accepted at the 2024 Conference on Computer Vision and Pattern Recognition (CVPR 2024) in Seattle. The paper is titled PUDD: Towards Robust Multi-modal Prototype-based Deepfake Detection.

Deepfake techniques generate highly realistic data, making it difficult for humans to distinguish real images from artificially generated ones. Recent deep learning-based deepfake detection methods, particularly with diffusion models, have shown remarkable progress. However, real-world applications increasingly need to detect unseen individuals, deepfake techniques, and scenarios. To address this limitation, we propose a Prototype-based Unified Framework for Deepfake Detection (PUDD). PUDD is a similarity-based detection system: it compares input data against known prototypes for video classification and flags potential deepfakes or previously unseen classes by detecting drops in similarity. Our extensive experiments reveal three key findings: (1) PUDD achieves 95.1% accuracy on Celeb-DF, outperforming state-of-the-art deepfake detection methods; (2) PUDD uses image classification as the upstream task during training, yet performs well on both image classification and deepfake detection at inference; (3) PUDD requires only 2.7 seconds to retrain on new data and emits 105 times less carbon than the state-of-the-art model, making it significantly more environmentally friendly.
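
To illustrate the general idea of prototype-based detection described above, the minimal sketch below (not the authors' code) builds per-class prototype embeddings, classifies an input by its best-matching prototype, and treats a drop in similarity below a threshold as a potential deepfake or unseen class. The embedding dimension, threshold value, and helper names are illustrative assumptions.

```python
# Minimal sketch of prototype-based, similarity-driven detection.
# All names, dimensions, and the threshold are illustrative assumptions,
# not the implementation from the PUDD paper.
import numpy as np

def build_prototypes(embeddings: np.ndarray, labels: np.ndarray) -> dict:
    """Average the L2-normalised embeddings of each class into one prototype."""
    prototypes = {}
    for c in np.unique(labels):
        proto = embeddings[labels == c].mean(axis=0)
        prototypes[int(c)] = proto / np.linalg.norm(proto)
    return prototypes

def classify(embedding: np.ndarray, prototypes: dict, threshold: float = 0.7):
    """Return (predicted_class, best_similarity).

    The class is None when the best cosine similarity drops below the
    threshold, i.e. the input looks like a deepfake or an unseen class."""
    x = embedding / np.linalg.norm(embedding)
    sims = {c: float(x @ p) for c, p in prototypes.items()}
    best_class, best_sim = max(sims.items(), key=lambda kv: kv[1])
    if best_sim < threshold:
        return None, best_sim
    return best_class, best_sim

# Toy usage with random 128-d vectors standing in for a backbone's features.
rng = np.random.default_rng(0)
train_emb = rng.normal(size=(100, 128))
train_lab = rng.integers(0, 5, size=100)
protos = build_prototypes(train_emb, train_lab)
pred, sim = classify(rng.normal(size=128), protos)
print(pred, round(sim, 3))
```

In this kind of scheme, adapting to new data only requires recomputing or adding prototypes rather than retraining a full network, which is consistent with the fast retraining and low carbon footprint reported above.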
