With new features such as the ultra-wide-angle lens, Night mode, and next-generation Smart HDR, mobile imaging has become the main highlight of this year’s iPhone 11 series. Among these features, however, there is also a very practical hidden one that has received little attention. At the launch event, Apple’s senior vice president even called it “the mad science of computational photography”: the Deep Fusion feature.
What is “deep fusion”?
In the iOS 13.2 update, Apple describes the feature this way: “Deep Fusion uses the A13 Bionic Neural Engine to capture multiple images at various exposures, run a pixel-by-pixel analysis, and combine the highest-quality parts of each image into a photo with improved texture and detail and reduced noise in medium to low light.”
While Deep Fusion sounds similar to Smart HDR in that both rely on multi-frame synthesis, the two differ in important respects. To see why, you have to understand the principle behind Deep Fusion:
1. Before the shutter is pressed, while the viewfinder is open, the iPhone continuously buffers four short-exposure frames and four standard-exposure auxiliary frames;
2. When the shutter is pressed, the iPhone captures one additional long-exposure frame;
3. Using machine learning on the Neural Engine, the A13 Bionic chip combines the short-exposure frames, the long-exposure frame, and the most detailed of the normal-exposure auxiliary frames;
4. During synthesis, processing is optimized per pixel for different element types, such as hair, skin, and background: high-weight elements with complex textures draw the most detail from the captured frames, while tone and color are blended in from the other auxiliary frames. The result is a more detailed “final photo”.
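Apple has not published the actual algorithm, but the per-pixel idea described above can be illustrated with a toy sketch: take detail from whichever frame deviates most from the stack average at each pixel, then blend that detail with the average tone. The `detail_weight` parameter and the deviation-based "detail" score are invented for illustration only.

```python
import numpy as np

def fuse_frames(frames, detail_weight=0.7):
    """Toy multi-frame fusion: per pixel, pick the value from the frame
    that deviates most from the stack mean (a crude stand-in for
    'most detail'), then blend it with the mean for tone. This is an
    illustration of the concept, not Apple's Deep Fusion algorithm."""
    stack = np.stack(frames).astype(np.float64)   # shape (N, H, W)
    mean = stack.mean(axis=0)                     # tonal base: frame average
    deviation = np.abs(stack - mean)              # crude per-pixel detail score
    best = np.argmax(deviation, axis=0)           # index of "sharpest" frame per pixel
    detail = np.take_along_axis(stack, best[None, ...], axis=0)[0]
    # weighted blend: texture from the chosen frame, tone from the average
    return detail_weight * detail + (1 - detail_weight) * mean
```

A real pipeline would first align the frames and weight regions by semantic class (hair, skin, sky), which this sketch omits.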
Unlike Smart HDR, Deep Fusion is virtually invisible on the iPhone: no indication of the feature appears in the system settings or the camera interface. That is the main reason it goes largely unnoticed.
In an interview with the press, Apple explained why Deep Fusion uses this “invisible” design: “Deep Fusion doesn’t appear in the Camera or Photos apps, and it doesn’t appear in a photo’s EXIF data. We don’t want people to have to think about how to get the best photo; the iPhone solves that automatically (including deciding when to use Deep Fusion).”
How does “deep fusion” work?
Deep Fusion currently requires an iPhone 11, iPhone 11 Pro, or iPhone 11 Pro Max updated to iOS 13.2, and not every lens triggers it. Taking the iPhone 11 Pro as an example, its three lenses behave as follows:
– Wide-angle lens: Smart HDR is used when the scene is bright (sunny days, high-contrast light, etc.); Deep Fusion kicks in when the lighting is poorer (indoors, overcast interiors, etc.); and Night mode activates when the scene is dim (night, dark indoor environments);
– Telephoto lens: uses Deep Fusion most of the time, switching to Smart HDR in good light; Night mode is not supported;
– Ultra-wide-angle lens: always shoots with Smart HDR; Deep Fusion and Night mode are not supported.
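The per-lens behavior above amounts to a simple decision rule. The sketch below encodes it as a function; the numeric brightness thresholds (0.3 and 0.7 on a 0–1 scale) are made up for illustration, since Apple does not document the actual cutoffs.

```python
def capture_mode(lens, light):
    """Pick the processing pipeline described for each iPhone 11 Pro lens.
    `light` is a rough scene-brightness score in [0, 1]; the 0.3/0.7
    thresholds are hypothetical, chosen only to make the logic concrete."""
    if lens == "ultra_wide":
        return "smart_hdr"                  # Deep Fusion / Night mode unsupported
    if lens == "telephoto":
        # Deep Fusion most of the time, Smart HDR only in good light
        return "smart_hdr" if light > 0.7 else "deep_fusion"
    if lens == "wide":
        if light > 0.7:
            return "smart_hdr"              # bright scenes
        if light > 0.3:
            return "deep_fusion"            # poor indoor / overcast light
        return "night_mode"                 # dim scenes
    raise ValueError(f"unknown lens: {lens}")
```

Because the decision is made silently by the camera app, a table or rule like this is the only practical way for users to predict which pipeline produced a given shot.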
So if you want to trigger Deep Fusion as often as possible, use the telephoto lens more. iPhone 11 users, whose phones lack a telephoto lens, will have to “try their luck” with the wide-angle lens.
In addition, since Deep Fusion cannot be activated while Capture Outside the Frame is on, you will need to go to Settings > Camera and turn off “Photos Capture Outside the Frame”.