Part 1 — AI Video Generators: What's Free, What's Good, and What's Still a Mess
Everyone's heard about AI generating video now. Half the claims you've seen are overstated. The other half are genuinely impressive if you know what you're doing. I've tested most of the major ones as of 2026, and here's the honest breakdown.
The core problem with AI video generators right now: they're very good at short clips and very bad at anything coherent over ten seconds. Motion consistency, physics, hands — all still genuinely difficult. That said, for specific use cases they're already useful enough to matter.
Runway Gen-3 Alpha — The Best Quality, But Not Really Free
Runway is what most professional video creators use when they need AI-generated footage. Gen-3 Alpha produces the cleanest results of any tool I've seen — smooth motion, reasonable physics, good detail. The free tier gives you 125 credits when you sign up, which is enough to generate maybe 5–8 short clips before it runs out. After that it's a subscription.
Is it worth the cost? If you're generating footage for a professional project, yes. For casual experimentation, probably not. The free trial is worth using just to see what the current ceiling of this technology actually looks like.
What it's good for: product shots, abstract visual backgrounds, short dramatic clips. What it can't do: anything with consistent human characters across multiple shots, accurate text in frame, or realistic hands. Nobody can do those reliably yet.
Kling AI — The One That Surprised Me
Kling comes from a Chinese company called Kuaishou, and when I first saw it I expected the usual: impressive demo clips that fall apart the moment you try the tool yourself. It didn't fall apart. The motion quality, especially for camera movements and fluid dynamics, is genuinely competitive with Runway at a significantly lower price point. The free tier is more generous than most.
The interface is in Chinese by default, which throws people off. There's an international version; use that. It's the same model with better English support.
For anything involving water, fire, or cinematic camera sweeps, Kling outperforms tools at twice its price. Where it struggles is close-up faces.
Pika Labs — Fast, Easy, Free Enough
Pika is the most accessible of the serious generators. Sign in with Google, describe what you want, get a clip in about a minute. The quality is a step below Runway and Kling, but the speed and ease of use make it practical for quick iterations.
Where Pika actually shines: animating still images. You upload a photo and describe how you want it to move — and it handles this surprisingly well. A landscape photo with moving clouds and water. A portrait where the subject subtly looks up. For this specific use case, Pika is arguably the best free option available right now.
OpenAI Sora — Still Mostly Locked
Sora is what everyone talked about when OpenAI released the demo clips in early 2024. The clips were genuinely stunning. The product rollout has been, to put it charitably, slow. Access expanded to ChatGPT Pro subscribers, then wider — but even in 2026 it still has usage caps and isn't the freely available revolution the demos implied.
The quality in the demos is real. The availability is not what was implied. If you have ChatGPT Pro, try it. If you don't, Runway and Kling are more useful right now.
Stable Video Diffusion — Free, Open Source, Needs a Decent Computer
If you want to run AI video generation completely locally with no credits, no subscriptions, and no data going to any company's servers — Stable Video Diffusion from Stability AI is the option. It's open source, runs on your own GPU, and costs nothing beyond your hardware.
The catch: you need a reasonably modern NVIDIA GPU with at least 8GB VRAM. Setup requires comfort with command-line tools. The output quality is behind Runway and Kling. But for privacy, for offline use, and for unlimited generation at zero cost — it's the only realistic option.
ComfyUI is the most approachable interface for running Stable Video Diffusion without spending an hour in a terminal. Worth the setup time if you plan to use this seriously.
Part 2 — Fixing Old Blurry Videos With AI Upscaling
This is the part of AI video that I think is actually more useful to more people than video generation. Everyone has old footage that's embarrassing to watch now — grainy camcorder clips from the early 2000s, pixelated phone videos from 2009, VHS transfers that look like they were recorded through a foggy window. AI upscaling can genuinely rescue these.
Not perfectly. Expectations matter here. AI upscaling doesn't recover information that wasn't there — it intelligently fills in detail that plausibly should have been there. The result usually looks significantly better, occasionally looks weird in specific frames, and rarely looks like true high-resolution footage. But "significantly better" is still worth a lot when the alternative is unwatchable.
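To see why "filling in" is different from "recovering", compare with classical upscaling: a plain nearest-neighbour or bilinear resize can only copy or average pixels that already exist, so edges get bigger, never sharper. AI upscalers go one step further and synthesise detail that plausibly fits. A minimal pure-Python sketch of the classical baseline (the image here is just a hypothetical grid of grayscale values):

```python
def upscale_nearest(img, factor=2):
    """Nearest-neighbour upscaling: every output pixel is a copy of an
    existing input pixel. No new information is created -- detail just
    gets bigger. AI upscalers instead synthesise plausible new detail,
    which is why their output can look wrong in individual frames."""
    return [
        [img[y // factor][x // factor]
         for x in range(len(img[0]) * factor)]
        for y in range(len(img) * factor)
    ]

# A tiny 2x2 "image": one bright pixel on a dark background.
tiny = [[0, 255],
        [0, 0]]
big = upscale_nearest(tiny)
# The bright pixel becomes a 2x2 bright block: blockier, not sharper.
```

That blockiness is exactly the gap AI upscalers fill, and the filling is a guess, which is why results are "significantly better" rather than "true high-resolution".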
Topaz Video AI — The Best Result, Not Free
Topaz Video AI is the professional standard for video upscaling. It's a desktop application, not a web tool, and it costs around $299 as a one-time purchase. For people with a large library of old footage worth preserving, this is money well spent. For occasional use, the cost is hard to justify.
What it does well: upscaling SD footage to 1080p, stabilising shaky footage, removing interlacing from VHS transfers, frame interpolation to smooth choppy old video. The processing is slow — a 30-minute video can take several hours depending on your computer's GPU. But the results are consistently better than any free alternative I've tested.
They offer a free trial with a watermark on output. Worth running your most important clips through before committing to a purchase.
DaVinci Resolve — Genuinely Free, Genuinely Good
DaVinci Resolve is a full professional video editor that happens to be free. Most people don't use it for AI upscaling because they don't realise it has built-in AI enhancement tools. The Super Scale feature upscales footage using AI processing, and it's built directly into the free version — no subscription, no credits.
It's not as powerful as Topaz Video AI. But for footage that's only slightly degraded — maybe 480p that you want at 1080p — it's more than adequate and costs nothing. The learning curve for Resolve is real, but there are countless tutorials. If you don't already have Topaz and aren't ready to buy it, Resolve is where I'd start.
CapCut — For Mobile, Surprisingly Capable
CapCut is primarily a mobile video editor, but it has AI enhancement features built in that work better than you'd expect for a free app. Upload a low-resolution clip, use the "Enhance" or "Remaster" function, and it runs AI processing to sharpen and upscale the footage.
The results won't match Topaz. But for a quick fix on your phone without installing desktop software, it's genuinely useful. Old family videos, clips you want to repost, anything where "noticeably better" matters more than "technically perfect" — CapCut handles this fine.
Flowframes — Free Frame Interpolation
Flowframes does one specific thing: it adds frames to video to make it smoother. Old footage shot at 15fps or 24fps can be interpolated up to 60fps, which makes it feel significantly more modern and watchable. It's free, open source, and runs locally on Windows.
Frame interpolation isn't the same as resolution upscaling: Flowframes makes motion smoother, not sharper. Pair it with a tool like Resolve (Resolve for the resolution, Flowframes for the frame rate) and old footage can come out looking dramatically better than the source. I've used this combination on 2005 family camcorder footage and the result was genuinely better than I expected.
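The idea behind interpolation is easy to sketch. Flowframes actually uses learned optical-flow models (RIFE) that estimate motion, but the crudest possible version, blending each pair of neighbouring frames to manufacture an in-between frame, shows the shape of the technique. This is a toy illustration, not what Flowframes does internally:

```python
def blend(frame_a, frame_b, t=0.5):
    """Linearly blend two frames pixel by pixel. Real interpolators
    (RIFE, as used by Flowframes) estimate motion instead of blending,
    but the goal is the same: synthesise an in-between frame."""
    return [
        [round(a * (1 - t) + b * t) for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(frame_a, frame_b)
    ]

def double_fps(frames):
    """Insert one synthetic frame between each real pair,
    roughly doubling the frame rate (e.g. 15fps -> 30fps)."""
    out = []
    for a, b in zip(frames, frames[1:]):
        out += [a, blend(a, b)]
    out.append(frames[-1])
    return out

clip = [[[0, 0]], [[100, 100]], [[200, 200]]]  # 3 frames of a tiny "video"
smooth = double_fps(clip)                       # now 5 frames
```

Naive blending produces ghosting on fast motion, which is exactly why motion-aware models like RIFE exist.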
Part 3 — How to Tell When a Video Is AI-Generated or Deepfaked
This matters more every month. AI-generated video has gone from obviously fake to genuinely convincing in under two years. Knowing what to look for isn't paranoia — it's just being a literate consumer of online content.
The tells are getting subtler, but they haven't disappeared entirely. Here's what I look for.
Eyes and Blinking
AI face generation has historically struggled with eyes. Not the appearance of eyes, but the behaviour. Real people blink irregularly — sometimes frequently, sometimes not for a while. AI-generated faces often blink at suspicious intervals, or produce a slightly glazed, static quality between blinks. When you watch a video and something about the face feels slightly off without being able to name it, eyes are usually the first thing to examine more carefully.
Pupil dilation is another tell. In a video where lighting changes, real pupils respond. AI sometimes misses this or applies it inconsistently between frames.
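The blink-timing tell above can even be checked numerically if you log blink timestamps while watching: irregular human blinking has high variance in the gaps between blinks, while metronomic blinking has almost none. A toy sketch, where the coefficient of variation and the 0.3 threshold are illustrative assumptions, not validated forensic cutoffs:

```python
from statistics import mean, stdev

def blink_regularity(blink_times):
    """Coefficient of variation of the gaps between blinks (seconds).
    Human blinking is irregular, so the CV is typically well above the
    illustrative 0.3 used here; a near-constant interval (CV near 0)
    is a weak hint of synthesis, not proof."""
    gaps = [b - a for a, b in zip(blink_times, blink_times[1:])]
    if len(gaps) < 2:
        return None  # not enough blinks to judge
    return stdev(gaps) / mean(gaps)

human = [1.2, 3.9, 5.1, 9.8, 11.0, 15.4]    # irregular, as real people blink
suspect = [2.0, 4.0, 6.0, 8.0, 10.0, 12.0]  # suspiciously metronomic
```

Treat this as a way of sharpening your intuition about "something feels off", not as a detector.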
Hair and Teeth
Hair remains one of the hardest things for AI to render consistently. Individual strands at the edges of the frame, hair moving against a complex background, or fine detail at the hairline — these are where current AI generators still produce noticeable artifacts. Zoom into the hairline of any AI face and look for blurring, smearing, or inconsistent strand behaviour during movement.
Teeth are similar. In a static image, AI handles teeth reasonably well. In a video where the person speaks or smiles dynamically, watch for teeth that appear to shift, merge, or change shape slightly between frames. It's subtle, but once you see it you can't unsee it.
Lip Sync
A deepfake, a video where a real person appears to speak words they never said, often has lip-sync issues. The mouth movements don't quite match the audio cadence. Consonants like P, B, and M require specific lip shapes that AI still sometimes misses or delays. If the mouth movement feels like watching a dubbed foreign film, that's worth noting.
This is less reliable than it used to be — dedicated lip-sync AI has improved significantly — but it's still a useful check on lower-budget fakes.
Background Inconsistency
AI video generators often produce backgrounds that are visually plausible but physically incoherent. Objects that appear and disappear between cuts, text that's garbled or morphs slightly, furniture that shifts position. The subject is usually the most coherent part of an AI video — the background gets less attention from the model.
For deepfakes — where a real background exists but a face has been replaced — look for lighting mismatches between the subject and their environment. If the light seems to be coming from different directions on the person's face versus the room around them, that's a significant indicator.
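The background-drift tell lends itself to a cheap automated check: pick a patch that should be static (a wall, a shelf) and measure how much its pixels change frame to frame. A real locked-off background sits near zero; AI-generated backgrounds tend to wander. A minimal sketch, with frames as hypothetical grids of grayscale values:

```python
def patch_drift(frames, y0, y1, x0, x1):
    """Mean absolute pixel change between consecutive frames inside a
    region that *should* be static. Near-zero drift is consistent with
    a real fixed background; steady nonzero drift is a weak hint that
    the background is being re-synthesised each frame."""
    drifts = []
    for a, b in zip(frames, frames[1:]):
        diffs = [abs(a[y][x] - b[y][x])
                 for y in range(y0, y1) for x in range(x0, x1)]
        drifts.append(sum(diffs) / len(diffs))
    return drifts

static = [[[10, 10], [10, 10]]] * 3          # 3 identical frames
drifty = [[[10, 10], [10, 10]],
          [[12, 9], [11, 10]],               # background slowly wandering
          [[14, 8], [12, 11]]]
```

Camera shake and compression noise also produce drift, so this only means anything on footage that should be genuinely static.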
Physics and Motion
AI still doesn't have a great intuitive understanding of physics. Fabric that moves wrong. Hair that flows in a direction inconsistent with the apparent wind. Hands — always hands. AI-generated hands are notorious and remain a reliable tell even now. Count fingers. Check joint positioning. Watch how the hands move relative to objects they're supposedly holding or touching.
Liquid is another consistent weakness. Water, coffee, wine — the way fluid moves and reflects in AI video still has a characteristic quality that's different from real footage. It's not always obvious, but if you're suspicious of a video and it contains liquid, look at it carefully.
Tools That Can Help
Manual inspection isn't always practical. A few tools exist specifically for detection. Intel's FakeCatcher claims to detect deepfakes by analysing blood flow patterns — subtle colour changes in facial skin that real humans produce but AI doesn't. Reality Defender is a professional-grade platform used by newsrooms. Deepware Scanner is a free option that works reasonably well on obvious fakes.
None of these are foolproof. A sophisticated deepfake made with current top-tier tools can fool detectors as well as humans. The detection arms race is real and ongoing. The practical advice isn't "use a tool and trust the result" — it's "be appropriately skeptical of emotionally charged video content from unfamiliar sources, especially when it conveniently confirms something you already believe."
That last part is the actual protection. Not the tools.
For more on how AI is reshaping video more broadly — from recommendation algorithms to compression — see our full AI and video guide. And if you're saving videos from social platforms for archival purposes, MyVideoCity handles that without signup or tracking.