Adobe’s VideoGigaGAN upscales blurry video to look 8x sharper

April 25, 2024
  • Adobe researchers developed an AI video upscaler that makes blurry videos up to 8x sharper
  • VideoGigaGAN overcomes the lack of detail, flickering, and aliasing that plague video upscalers
  • There are significant applications for VideoGigaGAN but Adobe hasn’t mentioned a release date

Adobe researchers unveiled VideoGigaGAN, a generative AI model that can upscale blurry videos into crisp, smooth video that looks up to 8x sharper.

We’ve had really good image upscalers for a while now, but making a good video upscaler is a much harder problem.

Video Super Resolution (VSR) is the process of taking individual frames of a video, upscaling the resolution and detail, and fitting the frames together to recreate the video.
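
At its simplest, that per-frame pipeline looks like the sketch below. This is only an illustration of the naive approach, written in Python with OpenCV and using bicubic resizing as a stand-in for a learned super-resolution model; it is not Adobe’s code.

    # Naive frame-by-frame video upscaling: read, upscale, and re-stitch each frame.
    # Bicubic resizing stands in for a real learned upscaler (e.g. a GAN generator).
    import cv2

    def upscale_frame(frame, scale=4):
        """Placeholder for a learned super-resolution model."""
        h, w = frame.shape[:2]
        return cv2.resize(frame, (w * scale, h * scale), interpolation=cv2.INTER_CUBIC)

    def upscale_video(src_path, dst_path, scale=4):
        cap = cv2.VideoCapture(src_path)
        fps = cap.get(cv2.CAP_PROP_FPS) or 30
        writer = None
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            up = upscale_frame(frame, scale)   # each frame is upscaled independently
            if writer is None:
                h, w = up.shape[:2]
                writer = cv2.VideoWriter(dst_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
            writer.write(up)                   # frames are stitched back into a video
        cap.release()
        if writer is not None:
            writer.release()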

Doing this well means balancing two competing goals: smooth motion from frame to frame and fine detail within each frame. Current VSR models typically produce video that is either smooth but blurry, or sharp but glitchy.

Adobe’s VideoGigaGAN upsamples blurry video to produce output that is both temporally consistent (smooth transitions between frames) and rich in high-frequency detail.

Here’s an example of what VideoGigaGAN can do.

As the name suggests, Adobe’s method relies on GigaGAN, an advanced generative adversarial network (GAN).

GANs are great at upsampling images, and GigaGAN is one of the best at image super-resolution. So why not simply use GigaGAN on each frame to upscale the image and then put them all together to make the video?

When Adobe’s researchers tried that, they got excellent per-frame resolution, but the resulting video was temporally inconsistent and flickered.

Adding temporal convolution and attention layers to GigaGAN fixed the temporal inconsistency, but the flickering was still an issue.
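
For readers curious what a temporal attention layer looks like in practice, here is a rough, hypothetical PyTorch sketch: attention runs along the time axis at each spatial position, so a pixel’s features can borrow information from the same spot in neighbouring frames. It illustrates the general technique only, not Adobe’s actual architecture.

    # Hypothetical temporal self-attention block (illustrative, not VideoGigaGAN code).
    import torch
    import torch.nn as nn

    class TemporalAttention(nn.Module):
        def __init__(self, channels, num_heads=4):
            super().__init__()
            self.norm = nn.LayerNorm(channels)
            self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

        def forward(self, x):
            # x: (batch, time, channels, height, width)
            b, t, c, h, w = x.shape
            # Fold spatial positions into the batch so attention runs over time only.
            seq = x.permute(0, 3, 4, 1, 2).reshape(b * h * w, t, c)
            q = self.norm(seq)
            out, _ = self.attn(q, q, q)
            seq = seq + out  # residual connection keeps the per-frame features intact
            return seq.reshape(b, h, w, t, c).permute(0, 3, 4, 1, 2)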

VideoGigaGAN addresses this by separating low-frequency and high-frequency elements in each frame and processing these differently.

The low-frequency feature map is smoothed with an anti-aliasing blur before downsampling, removing the fine detail that would otherwise become a source of noise and flickering.

Skip connections carry the high-frequency features around the model’s middle layers, so the fine details that would otherwise be lost in processing are added back into the output.
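
Here is a toy PyTorch sketch of those two ideas together, again purely illustrative rather than VideoGigaGAN’s real code: the feature map is blurred before downsampling so the low-frequency path doesn’t alias, and the high-frequency residual skips past a stand-in for the middle layers and is added back at the end.

    # Toy frequency-aware block (illustrative only; assumes even height and width).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def blur_downsample(x):
        """Low-pass blur each channel, then downsample 2x (anti-aliasing)."""
        k = torch.tensor([1.0, 2.0, 1.0])
        kernel = (k[:, None] * k[None, :]) / 16.0                    # 3x3 binomial filter
        kernel = kernel.view(1, 1, 3, 3).repeat(x.shape[1], 1, 1, 1).to(x)
        x = F.conv2d(x, kernel, padding=1, groups=x.shape[1])        # depthwise blur
        return F.avg_pool2d(x, kernel_size=2)

    class FrequencyAwareBlock(nn.Module):
        def __init__(self, channels):
            super().__init__()
            self.middle = nn.Sequential(                             # stand-in for heavy middle layers
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.ReLU(),
                nn.Conv2d(channels, channels, 3, padding=1),
            )

        def forward(self, x):
            low = F.interpolate(blur_downsample(x), scale_factor=2,
                                mode="bilinear", align_corners=False)  # smoothed, low-frequency path
            high = x - low                                             # high-frequency residual
            out = self.middle(low)                                     # process the smooth path
            return out + high                                          # skip connection restores detail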

You can read more about the technical details in Adobe’s paper.

The demos on Adobe’s GitHub are very impressive. Adobe hasn’t hinted at a release date, but let’s hope they let us use it soon.

Imagine what a tool like this could do for historical archival footage, classic movies, or even upscaling your favorite old TV shows into HD.

Eugene van der Watt

Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.
