Deep Fusion

Deep Fusion is an image-processing feature of iOS, developed by Apple, that uses the Neural Engine in recent iPhone models to apply machine learning to improve photo quality, especially in moderate to low-light conditions.

History
Deep Fusion was first announced on September 10, 2019 with the iPhone 11 series and requires the Neural Engine in the Apple A13 Bionic chip or later. Apple senior VP Phil Schiller referred to it as "computational photography mad science" when demonstrating the technology. Enabled through the iOS 13.2 update, a shutter press captures a preliminary succession of quick photos, followed by a longer exposure, then "fuses" them into a composite image with greater detail and less noise. The feature takes slightly more computational time than Smart HDR and is not supported in burst mode. In very low-light conditions, the camera automatically switches to Night Mode instead. Deep Fusion is also supported by all models of the iPhone 12 and 13 series, along with the 3rd-generation iPhone SE. Some professional photographers criticized the built-in camera app for not allowing Deep Fusion to be disabled, even when taking ProRAW photos in iOS 14.3 or later. A workaround is to use a professional third-party camera app that does provide these controls.
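The capture-and-fuse idea described above can be sketched in a toy model: averaging several quick, noisy exposures suppresses noise, and blending in a longer exposure contributes more signal. The frame counts, noise levels, and simple weighted blend below are illustrative assumptions; Apple's actual per-pixel, machine-learning-driven fusion is not public.

```python
import random

def fuse(short_frames, long_frame, detail_weight=0.5):
    """Toy multi-frame fusion: average the quick short exposures to
    suppress noise, then blend with the longer exposure.
    (Illustrative sketch only, not Apple's actual pipeline.)"""
    n = len(short_frames)
    fused = []
    for i in range(len(long_frame)):
        # Averaging n independent noisy samples cuts noise by ~1/sqrt(n).
        avg_short = sum(f[i] for f in short_frames) / n
        fused.append((1 - detail_weight) * avg_short + detail_weight * long_frame[i])
    return fused

# Simulate a 1-D "scene" captured with sensor noise (hypothetical values).
random.seed(0)
scene = [random.uniform(0.2, 0.8) for _ in range(1000)]            # true pixel values
shorts = [[p + random.gauss(0, 0.10) for p in scene] for _ in range(4)]
long_exp = [p + random.gauss(0, 0.03) for p in scene]              # longer, cleaner exposure

fused = fuse(shorts, long_exp)

def rmse(img):
    return (sum((a - b) ** 2 for a, b in zip(img, scene)) / len(scene)) ** 0.5

print("single short frame RMSE:", round(rmse(shorts[0]), 4))
print("fused image RMSE:       ", round(rmse(fused), 4))
```

The fused result lands much closer to the true scene than any single short frame, which is the basic motivation for composite imaging.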

Photonic Engine
On September 7, 2022, Apple introduced the iPhone 14 series with an updated version of this process marketed as the Photonic Engine, which applies Deep Fusion's computational photography earlier in the imaging pipeline for further improved visual quality. The Photonic Engine is applied automatically when taking photos in low-light conditions, even when Night Mode is active. Previous iPhone models do not support the Photonic Engine.

Articles

 * iPhone 11: What is Deep Fusion and how does it work? by Jonny Evans at Computerworld (2019-09-12)
 * How to use Deep Fusion with iPhone SE 3, iPhone 13, and more by Michael Potuck at 9to5Mac (2022-03-22)