Upload & Analyze
Drag & drop a file here
Video & Image
Preview: images only
Tip: use short clips
Results
History
API Documentation
Use these examples to integrate Deepfake Analyzer into your app. Toggle auth, generate a user API key, and try a quick health check below.
cURL (video)
JavaScript (fetch)
Node (axios)
Python (requests)
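As a minimal sketch of the Python (requests) flow, the example below assembles an authenticated multipart upload without sending it. The base URL, the `/analyze` path, and the bearer-style `Authorization` header are assumptions for illustration only; check the cURL and JavaScript tabs for the exact endpoint and auth scheme your deployment uses.

```python
import requests

BASE_URL = "https://api.example.com"  # hypothetical base URL
API_KEY = "YOUR_API_KEY"              # generate a user API key in the UI first

def build_analyze_request(video_bytes: bytes) -> requests.PreparedRequest:
    """Assemble (without sending) a multipart video upload to a
    hypothetical /analyze endpoint, authenticated with an API key."""
    req = requests.Request(
        method="POST",
        url=f"{BASE_URL}/analyze",
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"file": ("clip.mp4", video_bytes, "video/mp4")},
    )
    return req.prepare()

prepared = build_analyze_request(b"\x00\x00\x00 ftypisom")  # stub bytes
print(prepared.method, prepared.url)
```

To actually submit, pass the prepared request to a `requests.Session().send(...)` call; inspecting the prepared request first is a convenient way to verify headers before spending credits.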
Credits Service
About
Deepfake Analyzer is a research-grade demo that blends modern transformer-based audiovisual modeling with classical image processing to help assess media authenticity. It visualizes what the model focuses on, how audio aligns with video, and how sharp (or blurry) frames are—so you can understand the “why” behind a verdict.
Techniques used
- AV Transformer (Model): Cross-modal attention across audio and video streams
- Grad-CAM Heatmaps: Highlights regions that influence the model's decision
- Mel Spectrogram & Waveform: Acoustic context for speech-driven cues
- AV Sync Series: Correlation between audio RMS and mouth openness over time
- Laplacian Variance: Sharpness-based heuristic to flag defocus or smoothing artifacts
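Two of these heuristics are simple enough to sketch directly. Below is a minimal NumPy version of the variance-of-Laplacian sharpness score and the AV sync correlation; the function names, the 3×3 Laplacian kernel, and the Pearson-correlation choice are our illustrative assumptions, not the project's exact implementation (which would typically use `cv2.Laplacian` on real frames).

```python
import numpy as np

def laplacian_variance(gray: np.ndarray) -> float:
    """Sharpness score: variance of the Laplacian response over a
    grayscale frame. Low values flag defocus or heavy smoothing."""
    kernel = np.array([[0,  1, 0],
                       [1, -4, 1],
                       [0,  1, 0]], dtype=float)
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    for dy in range(3):            # correlate with the 3x3 kernel
        for dx in range(3):
            out += kernel[dy, dx] * gray[dy:dy + h - 2, dx:dx + w - 2]
    return float(out.var())

def av_sync_score(audio_rms: np.ndarray, mouth_openness: np.ndarray) -> float:
    """AV sync series summary: Pearson correlation of per-frame audio
    RMS against per-frame mouth openness. Near 1.0 = well aligned."""
    a = (audio_rms - audio_rms.mean()) / audio_rms.std()
    m = (mouth_openness - mouth_openness.mean()) / mouth_openness.std()
    return float(np.mean(a * m))

# A sharp checkerboard scores high; a flat gray frame scores zero.
sharp = (np.indices((32, 32)).sum(axis=0) % 2) * 255.0
flat = np.full((32, 32), 128.0)
```

In practice the per-frame scores would be computed over decoded video frames and plotted as the series shown in the Results tab.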
About this project
- Built as part of Major Project - 1 by 7th-semester students of Artificial Intelligence & Data Science
- College: P.I.E.M.R., Indore
- Deployable with Docker and AWS (Cognito, S3)
- Local guest mode supported; history can persist when storage is configured
Settings
Terms & Conditions
By using this service you agree to the following:
- Uploaded content may be processed to produce visual explanations and metrics.
- When S3 is enabled, artifacts are stored under a user-specific path with presigned-link access.
- Models are probabilistic; use results as guidance, not absolute truth.
- Do not upload unlawful or copyrighted material you do not own.