How Deepfake Video Actually Gets Made in 2026
Modern deepfake video uses face-swapping models, full-body generation, and voice synthesis either separately or combined. Face-swap deepfakes overlay a target person's face onto a source video. Full-generation deepfakes create a person entirely from scratch in a scene. The second category is harder to detect because there's no real underlying video to compare against - the whole thing is synthesised.
Tools like Runway, Kling AI, Sora, and several open-source models can generate realistic video from text prompts or reference images. The quality varies significantly by tool and by how much compute was used in generation. A cheap, fast generation looks worse than a careful, high-resource generation. But even the cheaper outputs fool people who aren't specifically looking for tells.
Visual Tells That Still Exist
The eyes are the starting point. Real eyes have irregular, random blinking patterns. Deepfake models often produce blink rates that are slightly too regular or too infrequent. Watch the eyes through the full video - does the blinking feel natural or does it have an almost mechanical cadence? This is harder to spot in short clips but more obvious in longer videos.
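The "mechanical cadence" point can be made concrete: if you already have blink timestamps for a clip (extracted by eye, or by any landmark tracker), the spread of the intervals between blinks is a rough regularity score. This is an illustrative sketch, not a calibrated detector; the timestamps and the 0.3 comparison point are assumptions.

```python
from statistics import mean, stdev

def blink_regularity(blink_times):
    """Coefficient of variation of blink intervals.

    Human blinking is irregular, so the interval CV is usually well above
    zero; a CV near zero means an almost perfectly even, mechanical cadence.
    `blink_times` is a list of blink timestamps in seconds (assumed already
    extracted from the video by some other means).
    """
    if len(blink_times) < 3:
        return None  # too few blinks to judge either way
    intervals = [b - a for a, b in zip(blink_times, blink_times[1:])]
    return stdev(intervals) / mean(intervals)

# Hypothetical timestamps: one eerily even clip, one natural-looking clip
suspect = blink_regularity([1.0, 4.0, 7.0, 10.0, 13.0])  # evenly spaced
natural = blink_regularity([0.8, 2.1, 6.5, 7.4, 11.0])   # irregular spacing
```

The longer the clip, the more intervals you collect and the more meaningful the score, which matches the point above about short clips being harder to judge.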
Skin texture is another signal. Real facial skin has pores, fine lines, variable texture across the face. Deepfake faces often have an airbrush quality - the skin looks slightly too smooth, too consistent, like a high-end skin filter that wasn't quite dialled back far enough. The forehead and cheeks especially tend to lose texture that real skin has.
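The "too smooth" quality corresponds to low local intensity variation. As a minimal sketch, assuming you have a grayscale patch of forehead or cheek pixels as a list of rows, the plain variance of the patch is a crude texture score; the example values and any threshold you'd apply are illustrative only.

```python
def patch_texture_score(patch):
    """Variance of grayscale intensities in a skin patch (list of pixel rows).

    Real skin shows pores and fine lines, i.e. local intensity variation;
    an over-smooth, airbrushed region scores low. A crude heuristic, meant
    only to illustrate what "lost texture" means numerically.
    """
    pixels = [p for row in patch for p in row]
    m = sum(pixels) / len(pixels)
    return sum((p - m) ** 2 for p in pixels) / len(pixels)

flat = patch_texture_score([[128, 128], [128, 128]])       # airbrushed: 0 variance
textured = patch_texture_score([[120, 140], [135, 125]])   # pores and lines
```

Comparing scores between face regions and the rest of the frame is more informative than any absolute number, since compression also smooths texture globally.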
Hair and hairline edges are persistently difficult for AI to generate convincingly. Individual hairs, flyaways, and the transition between hair and background don't behave physically correctly in many generated videos. If the hair looks slightly too perfect, slightly too still, or the hairline edge against the background has a subtle shimmer or wavering quality, that's worth noting.
Ear rendering is a further tell. Ears are complex geometric shapes and AI models have historically struggled with them. Earrings especially - their physics, how they move relative to the head, whether the depth relationships look physically plausible - are a detail worth examining.
Background Inconsistencies
The background in deepfake videos often shows artifacts that the central subject doesn't. Object edges near the face sometimes flicker or have unnatural transitions. If someone walks past behind the subject, their motion might not integrate correctly with the depth of the scene. Reflective surfaces - windows, glasses, shiny objects - may not reflect the scene consistently or may show impossible reflections.
Lighting coherence is harder to maintain in generated video. Does the light on the subject's face match the apparent light sources in the environment? Shadows that fall in the wrong direction, lighting that doesn't change when the subject moves relative to a light source, or facial lighting that doesn't match the ambient scene lighting are all signals.
Audio Sync and Voice Quality
Lip sync has gotten much better in 2026. Earlier deepfakes had obvious sync problems where mouth movements didn't match words. The current generation of tools is much more accurate, but subtle errors persist. Watch a suspected deepfake with attention to the lower face - do the jaw movements, and the specific way the lips form consonants and vowels, match what you're hearing? Extended speech gives you more data points to work with than short clips.
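One way automated tools approach this is to cross-correlate mouth movement against loudness and look for the best-fitting time offset. The sketch below assumes two per-frame series already exist (mouth openness from face landmarks, audio loudness per video frame); the scoring is illustrative, not a production sync detector.

```python
def best_sync_lag(mouth_openness, audio_envelope, max_lag=10):
    """Find the frame offset at which mouth movement best tracks loudness.

    A genuine recording usually peaks at or near lag 0; a large best-fit lag,
    or a weak peak at every lag, is a sync red flag. Inputs are hypothetical
    per-frame values assumed to be extracted elsewhere.
    """
    def corr(a, b):
        n = min(len(a), len(b))
        a, b = a[:n], b[:n]
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        va = sum((x - ma) ** 2 for x in a) ** 0.5
        vb = sum((y - mb) ** 2 for y in b) ** 0.5
        return cov / (va * vb) if va and vb else 0.0

    scores = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            scores[lag] = corr(mouth_openness[lag:], audio_envelope)
        else:
            scores[lag] = corr(mouth_openness, audio_envelope[-lag:])
    return max(scores, key=scores.get)
```

This also restates the point about extended speech: more frames mean a sharper correlation peak and a more trustworthy lag estimate.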
Voice quality at audio level: cloned voices sometimes have subtle artifacts in consonants, particularly plosives (p, b, t, d sounds) and sibilants (s, sh). The attack and decay of these sounds can be slightly wrong. Listening on headphones to the audio alone, with the video hidden or paused, sometimes reveals this more clearly than watching the video normally.
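If you want to know where in a clip the sibilants are before listening closely, the zero-crossing rate is a classic rough locator: noise-like sounds such as s and sh cross zero far more often than voiced speech. This is a crude aid for targeting your listening, assuming `samples` is a mono list of floats; frame length and any flagging threshold are illustrative.

```python
def zero_crossing_rate(samples, frame_len=400):
    """Per-frame zero-crossing rate of a mono audio signal.

    Sibilants are noise-like and show a high ZCR, so high-ZCR frames mark
    the segments worth replaying on headphones for cloning artifacts.
    At an assumed 16 kHz sample rate, frame_len=400 is 25 ms per frame.
    """
    rates = []
    for start in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[start:start + frame_len]
        crossings = sum(
            1 for a, b in zip(frame, frame[1:]) if (a < 0) != (b < 0)
        )
        rates.append(crossings / frame_len)
    return rates
```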
Detection Tools Available in 2026
Several tools exist for automated deepfake detection. Microsoft's Video Authenticator was an early entry - it analyses faces at pixel level for blending artifacts. Sensity AI provides a commercial deepfake detection API used by media organisations and platforms. Reality Defender has expanded its coverage to include video alongside audio detection. Hive Moderation has a deepfake detection product used in content moderation pipelines.
None of these are perfectly reliable. Detection models chase generation models, and the generation side tends to advance faster. A detection tool trained on data from six months ago may miss techniques developed since then. Use these tools as one signal, not a definitive verdict.
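"One signal, not a definitive verdict" can be operationalised by requiring agreement across detectors instead of trusting any single score. The sketch below is a minimal aggregation pattern; the detector names, the 0-to-1 score convention, and the 0.7 cutoff are all assumptions, not any vendor's actual API.

```python
def combine_detector_scores(scores, flag_threshold=0.7):
    """Aggregate scores (0 = looks real, 1 = looks fake) from several
    hypothetical detection services into one advisory summary.

    The point is the verdict logic: unanimous flags are strong, partial
    flags mean manual verification, and no flags are not proof of
    authenticity, since detectors lag the generators they chase.
    """
    avg = sum(scores.values()) / len(scores)
    flagged = [name for name, s in scores.items() if s >= flag_threshold]
    if len(flagged) == len(scores):
        verdict = "likely manipulated"
    elif flagged:
        verdict = "mixed signals: verify manually"
    else:
        verdict = "no detector flagged it (not proof of authenticity)"
    return {"average": avg, "flagged_by": flagged, "verdict": verdict}

# Hypothetical scores from three services
result = combine_detector_scores({"tool_a": 0.91, "tool_b": 0.85, "tool_c": 0.40})
```

Note that a stale detector scoring a clip 0.40 is exactly the "trained on six-month-old data" failure mode, which is why disagreement routes to manual review rather than averaging it away.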
The "Liar's Dividend" Problem
There's a secondary effect of deepfakes that gets less attention: because deepfakes exist, real videos can now be dismissed as fake. Someone caught on camera doing something they shouldn't can claim the video is AI-generated. This is called the "liar's dividend" - the mere existence of deepfake technology creates plausible deniability for real footage.
Content provenance systems are developing to address this. The C2PA (Coalition for Content Provenance and Authenticity) standard allows cameras and software to embed cryptographic signatures in media files showing where the content originated and what edits were made. Some camera manufacturers are implementing this. The goal is a chain of custody for media that makes provenance verifiable. Adoption is still early but the framework exists.
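The core mechanism is simpler than it sounds: a signature computed over the exact bytes of the media, so any later edit invalidates verification. C2PA itself uses X.509 certificate chains and a structured manifest, not the bare HMAC below; this stdlib-only sketch just demonstrates the tamper-evidence idea, and the key and content bytes are placeholders.

```python
import hashlib
import hmac

def sign_media(content: bytes, key: bytes) -> str:
    """Bind these exact bytes to a signing key (illustrative stand-in for a
    real provenance signature, which would use public-key certificates)."""
    return hmac.new(key, content, hashlib.sha256).hexdigest()

def verify_media(content: bytes, key: bytes, tag: str) -> bool:
    """True only if the bytes are exactly what was signed."""
    return hmac.compare_digest(sign_media(content, key), tag)

original = b"...video bytes straight from the camera..."
tag = sign_media(original, b"hypothetical-camera-key")

untouched = verify_media(original, b"hypothetical-camera-key", tag)
tampered = verify_media(original + b"one edited byte", b"hypothetical-camera-key", tag)
```

Here `untouched` verifies and `tampered` does not, which is the chain-of-custody property: a signed file that fails verification has been altered since signing.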
Your Practical Response
Slow down. Deepfakes are most effective when they're consumed quickly and emotionally. If a video makes a significant claim or shows something surprising, especially from an account you don't know well, take thirty seconds to look for corroboration before sharing or responding. Does any credible outlet report this? Is this the only source?
Check the account history and context. Deepfake content is often posted by accounts with limited history, operating under the cover of a breaking news moment, or on platforms where content can be posted and shared before moderation catches it.
For saving video content you want to examine offline or submit for analysis, MyVideoCity downloads from all major platforms. Our guide on AI voice cloning covers the audio deepfake side, which often accompanies video manipulation.