Why Old Video Looks the Way It Does
Old footage looks bad for a few overlapping reasons, and they each respond differently to AI treatment. Understanding this changes what you expect from any tool you try.
Resolution is the most obvious problem. A 480p video has far fewer pixels than your screen can display - roughly 410,000 against the two million-plus of a 1080p display - so the device stretches the image to fill the frame. That stretching creates the soft, blocky look everyone recognises. There wasn't much the original filmmaker could do about it - that was just what cameras recorded then.
Compression artifacts are a different beast. Every platform that has ever hosted video compresses it to save storage and bandwidth, and that compression discards visual information - sometimes a lot of it. You see the result as macroblocking (blocky patches in parts of the frame with movement), smearing, or ringing (the weird halos around hard edges). If the same video got re-uploaded multiple times over the years, those artifacts stack: each upload re-compresses the previous upload's output, a problem known as generation loss.
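Generation loss is easy to demonstrate yourself. Here's a minimal sketch that re-encodes the same clip five times at a deliberately starved bitrate; it assumes ffmpeg is installed and on your PATH, and the filenames are placeholders.

```python
import subprocess

# Re-encode the same clip repeatedly at a low bitrate to watch
# compression artifacts stack up with each generation.
src = "original.mp4"  # placeholder filename
for gen in range(1, 6):
    dst = f"generation_{gen}.mp4"
    subprocess.run([
        "ffmpeg", "-y", "-i", src,
        "-c:v", "libx264", "-b:v", "500k",  # deliberately low bitrate
        "-an",                              # drop audio; only video matters here
        dst,
    ], check=True)
    src = dst  # the next pass starts from the already-degraded copy
```

Play generation_1 and generation_5 side by side and the stacking is hard to miss, especially in areas with motion.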
Then there's sensor noise - the visual static that early phone cameras produced in anything less than bright light. It's random pixel variation across the frame that looks like grain or interference. It makes footage look older than it is, and it makes compression worse too: because noise is random, the encoder can't predict it from frame to frame and wastes bits trying to reproduce it.
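You can put a rough number on what noise costs. The sketch below encodes the same clip twice at the same quality setting, once as-is and once through ffmpeg's hqdn3d denoiser, then compares file sizes; filenames are placeholders.

```python
import os
import subprocess

src = "noisy_clip.mp4"  # placeholder filename

# Same constant-quality setting (CRF), with and without a denoise pass.
subprocess.run(["ffmpeg", "-y", "-i", src,
                "-c:v", "libx264", "-crf", "23", "as_is.mp4"], check=True)
subprocess.run(["ffmpeg", "-y", "-i", src, "-vf", "hqdn3d",
                "-c:v", "libx264", "-crf", "23", "denoised.mp4"], check=True)

# The noisy encode is usually far larger: the encoder spends bits
# faithfully reproducing static that carries no real detail.
for f in ("as_is.mp4", "denoised.mp4"):
    print(f, os.path.getsize(f) // 1024, "KiB")
```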
AI tools try to address all three, and the reason they work better than old-school upscaling comes down to training data. Traditional upscaling just stretches pixels mathematically. AI models have been trained on enormous datasets of paired low-quality and high-quality footage, so they've learned what a sharp edge actually looks like, what skin texture should be, what fabric and foliage look like at higher resolutions. They're making informed guesses rather than blind ones.
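For a concrete picture of what "stretching pixels mathematically" means, this is a traditional interpolation-based upscale with ffmpeg. It invents no detail; it just spreads the existing pixels over a larger frame. Filenames are placeholders.

```python
import subprocess

# A traditional upscale: bicubic interpolation spreads the existing
# pixels across a 1080p frame without adding any new detail.
subprocess.run([
    "ffmpeg", "-y", "-i", "input_480p.mp4",
    "-vf", "scale=1920:1080:flags=bicubic",
    "-c:v", "libx264", "-crf", "18",
    "upscaled_bicubic.mp4",
], check=True)
```

A copy like this makes a useful baseline: if an AI tool can't visibly beat it, the tool isn't earning its render time.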
The Temporal Problem Nobody Talks About
Here's what makes video enhancement harder than photo enhancement: consistency between frames.
Still image AI can do whatever it wants to each pixel without consequences. Video can't. If the AI decides to render a detail differently from one frame to the next, you get shimmer - that uncomfortable flickering you sometimes see in enhanced footage where edges or textures seem to vibrate during motion. It looks completely unnatural and is often worse than the original blur.
Good video AI tools process frames in groups, looking at what came before and after each frame before deciding how to enhance it. This keeps the result temporally consistent. It's also why these tools are slow and GPU-intensive. Processing fifteen seconds of footage can take several minutes on decent hardware. Plan accordingly.
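To make the frame-group idea concrete, here's a schematic sketch of sliding-window processing. Everything in it is hypothetical - `enhance_window` stands in for whatever model you're running, and the window radius is arbitrary - the point is only that each output frame is computed with its neighbours in view.

```python
RADIUS = 2  # hypothetical window: 2 frames before + 2 after the current one

def enhance_video(frames, enhance_window):
    """frames: a list of decoded frames (e.g. numpy arrays).
    enhance_window: a stand-in for a real model; it receives a group of
    frames plus the index of the frame to enhance within that group."""
    out = []
    for i in range(len(frames)):
        # Clamp the window at the start and end of the clip.
        lo = max(0, i - RADIUS)
        hi = min(len(frames), i + RADIUS + 1)
        # The model sees neighbouring frames, so the detail it invents
        # for frame i can stay consistent with frames i-1 and i+1.
        out.append(enhance_window(frames[lo:hi], centre=i - lo))
    return out
```

This also makes the cost obvious: producing one output frame means running the model across several input frames, which is why render times balloon.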
Free Tools That Are Actually Worth Your Time
Topaz Video AI is the professional benchmark. It runs locally on your machine, handles GPU acceleration well, and produces genuinely impressive results on clean source material. It also costs around $300 for a perpetual license. For one video you care about, that math doesn't work. For a whole library of old footage, it might.
For free options, Enhancr is the first thing to try. It's online, processes clips server-side, and offers a handful of free conversions without requiring payment details. Works best on clips under a few minutes. The quality on clean source footage is solid. There are daily limits on the free tier, but for most personal use cases that's fine.
DaVinci Resolve is professional editing software with a free tier, and it includes an AI Super Scale feature, found in a clip's attributes rather than on any one page. One caveat: Blackmagic lists Super Scale among the features of the paid Studio edition, so check whether your version actually offers it. It's not as sophisticated as a dedicated upscaling tool, but it runs locally with no file size restrictions, and if you're editing the footage anyway it fits naturally into the workflow. Many people sleep on this option.
Real-ESRGAN is an open-source model that runs locally. There are GUI wrappers that make it accessible without command-line knowledge. It produces excellent results and costs nothing. You do need a capable GPU and a bit of patience with the setup, but if you're technically comfortable it's the most flexible free option available.
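For reference, the usual command-line workflow with the prebuilt realesrgan-ncnn-vulkan binary is: extract frames, enhance the frame directory, reassemble. Treat the sketch below as exactly that - a sketch. Flag and model names vary between releases (check the project's README), the filenames are placeholders, and the 30 fps figure should match your source.

```python
import os
import subprocess

os.makedirs("in_frames", exist_ok=True)
os.makedirs("out_frames", exist_ok=True)

# 1. Extract every frame to a numbered PNG.
subprocess.run(["ffmpeg", "-y", "-i", "input.mp4",
                "in_frames/%06d.png"], check=True)

# 2. Enhance the whole directory. Flags follow the prebuilt
#    realesrgan-ncnn-vulkan binary; verify against your release.
subprocess.run(["realesrgan-ncnn-vulkan",
                "-i", "in_frames", "-o", "out_frames",
                "-n", "realesrgan-x4plus"], check=True)

# 3. Reassemble at the source frame rate, carrying audio over if present.
subprocess.run(["ffmpeg", "-y", "-framerate", "30",
                "-i", "out_frames/%06d.png", "-i", "input.mp4",
                "-map", "0:v", "-map", "1:a?",
                "-c:v", "libx264", "-crf", "18", "-pix_fmt", "yuv420p",
                "output_4x.mp4"], check=True)
```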
The Sequence Matters: Clean Before You Scale
One mistake people make is upscaling first and then trying to remove artifacts. Don't do this. When you enlarge footage before cleaning it, you enlarge the artifacts too. They become more obvious, harder to remove, and the final result looks worse than if you'd done it the other way around.
Good tools do it in the right order automatically: noise reduction first, artifact removal second, upscaling third. If you're using a tool that lets you chain processing steps manually, follow that sequence. It makes a real difference to the final output.
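If ffmpeg is your chaining tool, the same order expressed as a single filter chain looks like the sketch below. These are classical filters rather than AI ones - hqdn3d for noise, deblock for compression block edges, a Lanczos scale last - and the strengths shown are placeholder defaults to tune against your own footage.

```python
import subprocess

# Order matters: denoise -> deblock -> upscale, in one pass.
filters = ",".join([
    "hqdn3d=3:2:6:4",                 # denoise first (strengths are placeholders)
    "deblock=filter=strong:block=8",  # then soften compression block edges
    "scale=1920:1080:flags=lanczos",  # upscale last, once the frame is clean
])
subprocess.run([
    "ffmpeg", "-y", "-i", "source.mp4",
    "-vf", filters,
    "-c:v", "libx264", "-crf", "18",
    "output_1080p.mp4",
], check=True)
```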
Start With the Best Copy You Can Get
If the footage currently lives on a platform - uploaded to Facebook years ago, shared on Instagram, sitting on Vimeo - download the highest quality version available before processing it. AI enhancement has a ceiling, and that ceiling is set by whatever you feed it. Starting from a compressed stream when a better copy exists is leaving quality on the table.
MyVideoCity handles downloads from TikTok, Instagram, Facebook, X, and Vimeo. Paste the link, select the highest resolution the platform offers, download it. That file is your source material. The better it is going in, the better it comes out the other side.
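Once you have the file, confirm what you actually got before committing hours to a render. A quick ffprobe check of the video stream (the filename is a placeholder):

```python
import json
import subprocess

# Inspect the downloaded file's video stream before enhancing it.
result = subprocess.run([
    "ffprobe", "-v", "error", "-select_streams", "v:0",
    "-show_entries", "stream=width,height,avg_frame_rate,bit_rate",
    "-of", "json", "downloaded.mp4",
], capture_output=True, text=True, check=True)

stream = json.loads(result.stdout)["streams"][0]
print(stream)  # width and height tell you whether you really got the best version
```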
What to Realistically Expect
Clean footage shot in decent light, upscaled from 480p to 1080p, can look genuinely impressive. The kind of result where you find yourself thinking it looks like it was shot at 1080p, just on an older camera. Details that felt absent become visible. The overall impression shifts from "old footage" to "footage from a different era."
Heavily compressed footage is a different story. If large sections of the frame got turned into uniform color blocks by aggressive compression, there's no original detail underneath to recover. AI can make plausible guesses about what might be there, and sometimes those guesses look fine. But you're looking at synthesized pixels, not restored ones. That's an important distinction if accuracy matters to you.
Motion blur is also stubbornly hard to fix. Footage blurred because the camera or the subject was moving fast doesn't respond to sharpening the way soft-focus footage does: the detail was smeared across multiple pixels during the exposure, and sharpening that smear tends to produce artifacts rather than edges. There are specialized deblur models, but their results are inconsistent.
My honest recommendation: test on a thirty-second clip before committing to a multi-hour render. Every piece of source material responds differently, and knowing what you're going to get before you invest the time is just good sense.
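Cutting that test clip is quick with ffmpeg - a stream copy, so nothing gets re-encoded. The start time and filenames are placeholders.

```python
import subprocess

# Copy 30 seconds out of the source without re-encoding. With stream
# copy, the seek snaps to the nearest keyframe before the start time.
subprocess.run([
    "ffmpeg", "-y", "-ss", "00:01:00", "-i", "source.mp4",
    "-t", "30", "-c", "copy", "test_clip.mp4",
], check=True)
```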
For more on video quality and formats, see the Video Formats and Quality Guide and the full AI Video Tools Guide.