According to media reports, a Microsoft Research team including Ziyu Wan and Bo Zhang has developed a new deep-learning algorithm that can restore severely degraded old photos. Unlike conventional restoration tasks that can be solved through supervised learning, the degradation of real photos is complex, and the domain gap between synthetic images and real old photos prevents a network trained on synthetic data from generalizing.
The new technique proposes a novel triplet domain translation network that uses both real photos and a large number of synthetic images. Specifically, the team trained two variational autoencoders (VAEs) to map old photos and clean photos into two latent spaces, respectively. The translation between these two latent spaces is then learned from synthetic paired data.
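The two-VAE pipeline described above can be illustrated with a minimal sketch. The linear "encoders", "decoder", and translation matrix below are toy stand-ins invented for illustration (the real method uses trained convolutional VAEs and a learned translation network); only the data flow reflects the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions; the actual networks operate on full images.
IMG_DIM, LATENT_DIM = 64, 8

# Stand-in linear maps for the two VAEs and the latent translation.
E_old = rng.normal(size=(LATENT_DIM, IMG_DIM)) * 0.1    # "encoder" for old/degraded photos
D_clean = rng.normal(size=(IMG_DIM, LATENT_DIM)) * 0.1  # "decoder" for clean photos
T = rng.normal(size=(LATENT_DIM, LATENT_DIM)) * 0.1     # latent-space translation
                                                        # (learned on synthetic pairs)

def restore(x_old):
    """Pipeline: old photo -> old latent -> clean latent -> restored photo."""
    z_old = E_old @ x_old    # encode the degraded photo into its latent space
    z_clean = T @ z_old      # translate between the two compact latent spaces
    return D_clean @ z_clean # decode in the clean-photo domain

x = rng.normal(size=IMG_DIM)  # a toy "degraded photo" as a flat vector
y = restore(x)
print(y.shape)                # (64,) -- same size as the input vector
```

Because the translation is learned between compact latent codes rather than raw pixels, the same mapping can be applied to real photos once both domains are encoded.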
Because the domain gap is closed in this compact latent space, the learned translation generalizes well to real photos. To address the mix of degradations found in a single old photo, they designed a global branch with a nonlocal block for structured defects such as scratches and dust spots, and a local branch for unstructured defects such as noise and blur. Fusing the two branches in the latent space improves the network's ability to restore old photos suffering from multiple defects. The method outperforms existing approaches in the visual quality of the restored images.
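The two-branch fusion can be sketched as follows. The per-channel scaling, the mean-based "nonlocal" fill, and the defect mask are all simplified assumptions made for illustration; in the actual network both branches are learned and the nonlocal block computes attention over feature positions.

```python
import numpy as np

rng = np.random.default_rng(1)
C, H, W = 4, 16, 16

# Toy latent feature map standing in for the encoder output.
z = rng.normal(size=(C, H, W))

# Hypothetical binary mask marking structured defects (scratches, dust) -- an assumption.
mask = np.zeros((H, W))
mask[4:8, 4:8] = 1.0

def local_branch(z):
    """Unstructured defects (noise, blur): a toy per-channel rescaling."""
    w = np.linspace(0.5, 1.5, z.shape[0])[:, None, None]
    return z * w

def global_branch(z, mask):
    """Structured defects: a toy nonlocal step that fills masked positions
    with the average of features from intact regions."""
    flat = z.reshape(z.shape[0], -1)       # (C, H*W)
    keep = mask.reshape(-1) == 0
    context = flat[:, keep].mean(axis=1)   # global context from intact pixels
    out = flat.copy()
    out[:, ~keep] = context[:, None]       # fill defective positions globally
    return out.reshape(z.shape)

# Fuse the two branches in latent space, gated by the defect mask.
fused = mask * global_branch(z, mask) + (1 - mask) * local_branch(z)
print(fused.shape)  # (4, 16, 16)
```

The mask gates which branch dominates at each position: scratched regions draw on global context, while the rest of the image is handled by the local branch.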
Technical demo video
Unfortunately, Microsoft hasn’t provided a demo site to test the technology.