Researchers propose AI models that can improve the quality of any video

Researchers are increasingly using AI to convert historical footage into high-resolution, high-frame-rate video that looks as though it were shot with modern equipment. To simplify that process, researchers at the University of Rochester, Northeastern University, and Purdue University recently proposed a framework that generates high-resolution, slow-motion video from low-frame-rate, low-resolution footage.

According to the team, their "Space-Time Video Super-Resolution" (STVSR) algorithm not only produces better image quality than existing methods, but also runs three times faster than recent models.

In a sense, the framework is a step up from the video-processing AI model Nvidia released in 2018, which could apply slow motion to any video. Similar super-resolution technology has also been applied to video games: last year, Final Fantasy fans used a $100 piece of software called AI Gigapixel to improve the background resolution of Final Fantasy VII.

Specifically, STVSR jointly learns temporal interpolation (how to synthesize intermediate video frames that do not exist between the original frames) and spatial super-resolution (how to reconstruct a high-resolution frame from the corresponding reference frame and its neighbors). It then synthesizes high-resolution slow-motion video by using video context and temporal alignment to reconstruct frames from the aggregated features.
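
The description above amounts to a one-stage pipeline that interleaves temporal interpolation in feature space with spatial upsampling. As a rough illustration of that idea only (not the authors' actual architecture), a minimal PyTorch-style sketch might look like the following; the module names and layer choices are hypothetical placeholders:

```python
# Illustrative sketch, assuming PyTorch; not the authors' network.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySTVSR(nn.Module):
    def __init__(self, channels=64, scale=4):
        super().__init__()
        self.scale = scale
        self.extract = nn.Conv2d(3, channels, 3, padding=1)              # per-frame features
        self.interp = nn.Conv2d(2 * channels, channels, 3, padding=1)    # fuse neighbors -> missing frame
        self.fuse = nn.Conv2d(channels, channels, 3, padding=1)          # stand-in for temporal aggregation
        self.to_rgb = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, lr_frames):
        # lr_frames: (B, T, 3, H, W) low-resolution, low-frame-rate input
        b, t, c, h, w = lr_frames.shape
        feats = [self.extract(lr_frames[:, i]) for i in range(t)]

        # Temporal interpolation in feature space: synthesize features for the
        # missing intermediate frame from each pair of neighboring frames.
        full = []
        for i in range(t - 1):
            full.append(feats[i])
            full.append(self.interp(torch.cat([feats[i], feats[i + 1]], dim=1)))
        full.append(feats[-1])

        # Spatial super-resolution: upsample every original or synthesized
        # feature map and project back to RGB, giving a high-res, high-fps clip.
        out = []
        for f in full:
            f = F.relu(self.fuse(f))
            f = F.interpolate(f, scale_factor=self.scale, mode="bilinear",
                              align_corners=False)
            out.append(self.to_rgb(f))
        return torch.stack(out, dim=1)  # (B, 2T-1, 3, scale*H, scale*W)
```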

The researchers trained STVSR on a dataset of more than 60,000 seven-frame clips from Vimeo, and used a separate evaluation corpus divided into fast-motion, normal-motion, and slow-motion sets to measure performance under a variety of conditions. In their experiments, they found that STVSR delivered significant improvements on fast-motion videos, including challenging action footage such as basketball players moving quickly across a court. The model reconstructed image structure more accurately with fewer blurring artifacts, while also being four times smaller and at least twice as fast as the baseline models, media reported.
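
The seven-frame clips described above serve as high-resolution, full-frame-rate ground truth. One plausible way to derive low-frame-rate, low-resolution training inputs from such clips is sketched below; this is an assumption about the preprocessing, not the authors' released data pipeline:

```python
# Illustrative sketch, assuming PyTorch tensors; not the authors' pipeline.
import torch
import torch.nn.functional as F

def make_training_pair(clip, scale=4):
    """clip: (7, 3, H, W) high-resolution, full-frame-rate ground truth."""
    hr_target = clip                       # supervise all 7 output frames
    lr_input = clip[::2]                   # keep frames 0, 2, 4, 6 -> low frame rate
    lr_input = F.interpolate(lr_input, scale_factor=1 / scale,
                             mode="bicubic", align_corners=False)  # low resolution
    return lr_input, hr_target
```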

“With this single-stage design, our network can well explore the intrinsic connection between temporal interpolation and spatial super-resolution in the task,” the co-authors of the preprint wrote. “It enables our model to adaptively learn to mitigate large-motion problems with useful local and global temporal contexts. Extensive experiments show that our framework is more effective and efficient than existing AI models, and that the proposed feature temporal interpolation network and deformable models can handle very challenging fast-motion videos.”
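
The quote refers to deformable components for coping with large motion. As a loose, illustrative sketch of that general idea only, the snippet below uses torchvision's deformable convolution to warp a neighboring frame's features toward a reference frame; the offset predictor here is a hypothetical placeholder rather than any module from the paper:

```python
# Illustrative sketch, assuming PyTorch and torchvision; not the paper's module.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformAlign(nn.Module):
    def __init__(self, channels=64, kernel=3):
        super().__init__()
        # Predict per-pixel sampling offsets from the concatenated features.
        self.offset_pred = nn.Conv2d(2 * channels, 2 * kernel * kernel,
                                     kernel, padding=kernel // 2)
        self.deform = DeformConv2d(channels, channels, kernel,
                                   padding=kernel // 2)

    def forward(self, feat_ref, feat_nbr):
        offsets = self.offset_pred(torch.cat([feat_ref, feat_nbr], dim=1))
        return self.deform(feat_nbr, offsets)  # neighbor features aligned to the reference
```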

According to media reports, the researchers intend to release the source code this summer.