With the SIGGRAPH 2020 conference about to take place, Facebook researchers have published a paper on "supersampling" technology. This machine-learning technique for improving image clarity is comparable to NVIDIA's DLSS (Deep Learning Super Sampling). Unlike DLSS, however, this neural supersampling requires no specialized hardware or software, and its results are roughly on par with the DLSS effect.
“Closest to our work, NVIDIA recently released Deep Learning Super Sampling (DLSS), which upsamples low-resolution images in real time via neural networks.
In this paper, we introduce a new approach that is easy to integrate into modern game engines, requires no special hardware or software, and can be applied to a wide range of existing software platforms, acceleration hardware, and displays.
We observed that the additional auxiliary information provided by motion vectors plays a key role in neural supersampling. Motion vectors define the geometric correspondence between pixels in consecutive frames: each motion vector points to the sub-pixel location where a visible surface point appeared in the previous frame. These values are usually estimated by computer vision methods, but such optical-flow estimation algorithms are error-prone. In contrast, a rendering engine can produce dense motion vectors directly, providing reliable, rich input for neural supersampling applied to rendered content.
Our approach builds on these observations, combining this additional auxiliary information with a novel spatio-temporal neural network design that maximizes image and video quality while delivering real-time performance.”
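To make the motion-vector idea concrete, here is a minimal, illustrative sketch (not the paper's actual implementation) of how a previous frame can be backward-warped into the current frame's pixel grid using engine-supplied dense motion vectors. The function name, array layout, and the (dy, dx) motion-vector convention are assumptions for this example; such a warped frame would then serve as one input to the upsampling network.

```python
import numpy as np

def warp_previous_frame(prev_frame, motion_vectors):
    """Backward-warp the previous frame into the current frame's pixel grid.

    prev_frame:     (H, W, C) float array, the previously rendered frame.
    motion_vectors: (H, W, 2) float array; for each current pixel (y, x),
                    motion_vectors[y, x] = (dy, dx) points to the sub-pixel
                    location in the previous frame where the same visible
                    surface point appeared (assumed convention).
    """
    h, w, _ = prev_frame.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # Sub-pixel source coordinates in the previous frame.
    src_y = ys + motion_vectors[..., 0]
    src_x = xs + motion_vectors[..., 1]

    # Bilinear sampling with edge clamping.
    y0 = np.clip(np.floor(src_y).astype(int), 0, h - 1)
    x0 = np.clip(np.floor(src_x).astype(int), 0, w - 1)
    y1 = np.clip(y0 + 1, 0, h - 1)
    x1 = np.clip(x0 + 1, 0, w - 1)
    fy = (src_y - np.floor(src_y))[..., None]
    fx = (src_x - np.floor(src_x))[..., None]

    top = prev_frame[y0, x0] * (1 - fx) + prev_frame[y0, x1] * fx
    bot = prev_frame[y1, x0] * (1 - fx) + prev_frame[y1, x1] * fx
    return top * (1 - fy) + bot * fy
```

With all-zero motion vectors the warp is an identity, and a uniform (0, 1) vector field shifts the image one pixel to the left; errors in estimated vectors would corrupt this correspondence, which is why engine-generated vectors are preferable to optical-flow estimates.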
In the article, Facebook notes that neural supersampling could be applied to AR and VR applications to benefit its Oculus platform. If the results hold up, it should also be applicable to other 3D games. Of course, new technologies must be proven in practice, and we look forward to seeing how this neural supersampling actually performs.